I've been looking at outstanding unittest issues as part of my
preparation for my PyCon talk. There are a couple of changes (minor) I'd
like to make that I thought I ought to run past Python-Dev first. If I
don't get any responses then I'll just do it, so you have been warned. :-)
The great Google merge into unittest happened at PyCon last year.
This included a change to TestCase.shortDescription() so that it would
*always* include the test name, whereas previously it would return the
test docstring or None.
The problem this change solved was that tests with a docstring would not
have their name (test class and method name) reported during the test
run. Unfortunately the change broke part of twisted test running.
Reported as issue 7588.
It seems to me that the same effect (always reporting test name) can be
achieved in _TextTestResult.getDescription(). I propose to revert the
change to TestCase.shortDescription() (which has both a horrible name
and a horrible implementation and should probably be renamed
getDocstring so that what it does is obvious but never mind) and put the
change into _TextTestResult.
It annoys me that _TextTestResult is private, as you will almost
certainly want to use it or subclass it when implementing custom test
systems. I am going to rename it TextTestResult, alias the old name and
document the old name as being deprecated.
Another issue that I would like to address, but there are various
possible approaches, is issue 7559: http://bugs.python.org/issue7559
Currently loadTestsFromName catches ImportError and rethrows as
AttributeError. This is horrible (it obscures the original error) but
there are backwards compatibility issues with fixing it. There are three options:
1) Leave it (the default).
2) Only throw an AttributeError if the import fails due to the name
being invalid (the module not existing); otherwise allow the error
through. (A minor but less serious change in behavior.)
3) A new method that turns failures into pseudo-tests that fail with the
original error when run, possibly deprecating loadTestsFromName.
I favour option 3, but can't think of a good replacement name. :-)
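Option 3 could be sketched roughly like this (the helper name and message format are hypothetical, not a proposed API):

```python
import traceback
import unittest

def make_failed_import_test(name, exc):
    """Sketch: wrap an import failure in a pseudo-test that fails with
    the original error when run, instead of masking it as an
    AttributeError at load time."""
    message = 'Failed to import test module: %s\n%s' % (
        name, ''.join(traceback.format_exception_only(type(exc), exc)))

    class ImportFailure(unittest.TestCase):
        def runTest(self):
            # Re-raise at run time so the test run reports the real cause.
            raise ImportError(message)

    return ImportFailure()

# A loader would return this in place of the unimportable module's tests.
try:
    __import__('no_such_module_xyz')
except ImportError as e:
    suite = unittest.TestSuite([make_failed_import_test('no_such_module_xyz', e)])

result = unittest.TestResult()
suite.run(result)
print(len(result.errors))  # the pseudo-test records the original error
```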
Despite deprecating (in the documentation - no actual deprecation
warnings, I believe) a lot of the duplicate ways of doing things (assert*
favoured over fail* and assertEqual over assertEquals) we didn't include
deprecating assert_ in favour of assertTrue. I would like to add that to
the documentation. After 3.2 is out I would like to clean up the
documentation, removing mention of the deprecated methods from the
*main* documentation into a separate 'deprecated methods' section. They
currently make the documentation very untidy. The unittest page should
probably be split into several pages anyway and needs improving.
Other outstanding minor issues:
Allow dotted names for test discovery
http://bugs.python.org/issue7780 - I intend to implement this as
described in the last comment
A 'check_order' optional argument (defaulting to True) for
http://bugs.python.org/issue7832 - needs patch
The breaking of __unittest caused by splitting unittest into a package
needs fixing. The fix needs to work when Python is run without frames
http://bugs.python.org/issue7815 - needs patch
Allow a __unittest (or similar) decorator for user implemented assert methods.
http://bugs.python.org/issue1705520 - needs patch
Allow modules to define test_suite callable.
http://bugs.python.org/issue7501 - I propose to close as rejected. Use load_tests instead.
Display time taken by individual tests when in verbose mode.
http://bugs.python.org/issue4080 - anyone any opinions?
Allow automatic formatting of arguments in assert* failure messages.
http://bugs.python.org/issue6966 - I propose to close as rejected
removeTest() method on TestSuite
http://bugs.python.org/issue1778410 - anyone any opinions?
expect methods (delayed fail)
http://bugs.python.org/issue3615 - any opinions? Personally I think that
the TestCase API is big enough already
All the best,
On Tue, Feb 2, 2010 at 8:08 PM, Barry Warsaw <barry(a)python.org> wrote:
> I'm thinking about doing a Python 2.6.5 release soon. I've added the
> following dates to the Python release schedule Google calendar:
> 2010-03-01 Python 2.6.5 rc 1
> 2010-03-15 Python 2.6.5 final
> This allows us to spend some time on 2.6.5 at PyCon if we want. Please let me
> know if you have any concerns about those dates.
I've noticed a couple of issues that 100% crash Python 2.6.4, like this
one: http://bugs.python.org/issue6608. Is it OK to release new
versions that are known to crash?
On approximately 1/27/2010 1:08 AM, came the following characters from
the keyboard of Glenn Linderman:
> Without reference to distutils, it seems the pieces are:
> 1) a way to decide what to include in the package
> 2) code that knows where to put what is included, on one or more
> 3) the process to create the ZIP file that includes 1 & 2, and call it
> an appropriate name.py
> 3 looks easy, once 1 & 2 are figured out. distutils might provide the
> foundation for 1. 2 sounds like something a distutils application
> might create. I'm not sure that distutils is in the business of
> building installer programs, my understanding is that it is in the
> business of providing a standard way of recording and interpreting
> bills of materials and maybe dependencies, but that is based only on
> reading discussions here, not reading documentation. I haven't had a
> chance to read all the module documentation since coming to python.
3 was rather easy, and has come in handy for me, as it turned out: see
Glenn -- http://nevcal.com/
A protocol is complete when there is nothing left to remove.
-- Stuart Cheshire, Apple Computer, regarding Zero Configuration Networking
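Step 3 above (bundling everything into a runnable archive) leans on the fact that, since Python 2.6, the interpreter will execute a ZIP archive that contains a top-level __main__.py. A self-contained sketch (file names are illustrative):

```python
import os
import subprocess
import sys
import tempfile
import zipfile

# Build a zip archive with a top-level __main__.py; the interpreter
# will run such an archive directly when passed on the command line.
workdir = tempfile.mkdtemp()
archive = os.path.join(workdir, 'app.zip')
with zipfile.ZipFile(archive, 'w') as zf:
    zf.writestr('__main__.py', "print('hello from the archive')\n")

out = subprocess.check_output([sys.executable, archive])
print(out.decode().strip())  # hello from the archive
```

Steps 1 and 2 (deciding what goes in, and laying it out inside the archive) are the part where distutils' bill-of-materials machinery would plug in.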
To follow up on some of the open issues:
On Wed, Jan 20, 2010 at 2:27 PM, Collin Winter <collinwinter(a)google.com> wrote:
> Open Issues
> - *Code review policy for the ``py3k-jit`` branch.* How does the CPython
> community want us to proceed with respect to checkins on the ``py3k-jit``
> branch? Pre-commit reviews? Post-commit reviews?
> Unladen Swallow has enforced pre-commit reviews in our trunk, but we realize
> this may lead to long review/checkin cycles in a purely-volunteer
> organization. We would like a non-Google-affiliated member of the CPython
> development team to review our work for correctness and compatibility, but we
> realize this may not be possible for every commit.
The feedback we've gotten so far is that at most, only larger, more
critical commits should be sent for review, while most commits can
just go into the branch. Is that broadly agreeable to python-dev?
> - *How to link LLVM.* Should we change LLVM to better support shared linking,
> and then use shared linking to link the parts of it we need into CPython?
The consensus has been that we should link against LLVM as a shared library.
Jeffrey Yasskin is now working on this in upstream LLVM. We are
tracking this at
> - *Prioritization of remaining issues.* We would like input from the CPython
> development team on how to prioritize the remaining issues in the Unladen
> Swallow codebase. Some issues like memory usage are obviously critical before
> merger with ``py3k``, but others may fall into a "nice to have" category that
> could be kept for resolution into a future CPython 3.x release.
The big-ticket items here are what we expected: reducing memory usage
and startup time. We also need to improve profiling options, both for
oProfile and cProfile.
> - *Create a C++ style guide.* Should PEP 7 be extended to include C++, or
> should a separate C++ style PEP be created? Unladen Swallow maintains its own
> style guide [#us-styleguide]_, which may serve as a starting point; the
> Unladen Swallow style guide is based on both LLVM's [#llvm-styleguide]_ and
> Google's [#google-styleguide]_ C++ style guides.
Any thoughts on a CPython C++ style guide? My personal preference
would be to extend PEP 7 to cover C++ by taking elements from
http://code.google.com/p/unladen-swallow/wiki/StyleGuide and the LLVM
and Google style guides (which is how we've been developing Unladen
Swallow). If that's broadly agreeable, Jeffrey and I will work on a
patch to PEP 7.
PyCon is coming! Tomorrow (February 10th) is the last day for
pre-conference rates. You can register for PyCon online at:
Register while it is still Feb. 10th somewhere in the world and rest
easy in the knowledge that within 10 days you will be enjoying the company
of some of the finest Python hackers in the world.
As an additional bonus, PyCon this year will be in Atlanta, making it an
ideal location for those looking for a way to escape the late winter
blizzards in the northeastern United States, or the dreary fog of the
See you at PyCon 2010!
I've reviewed PEP 345 and PEP 386 and am satisfied that after some
small improvements they will be accepted. Most of the discussion has
already taken place.
I have one comment on PEP 345: Why is author-email mandatory? I'm sure
there are plenty of cases where either the author doesn't want their
email address published, or their last known email address is no longer
valid. (Tarek responded off-line that it isn't all that mandatory; I
propose to say so in the PEP.)
I am also looking at PEP 376 but I expect that Tarek will start
another round of discussion on it. Hopefully all three PEPs will be
accepted in time for inclusion in Python 2.7.
--Guido van Rossum (python.org/~guido)
I have submitted a patch and a test script for issue 4037 on the bug
tracker, "doctest.py should include method descriptors when looking
inside a class __dict__"
I would be grateful if somebody could review it please, and if suitable, commit it.
Pascal Chambon writes:
> I don't really get it there... it seems to me that multiprocessing only
> requires picklability for the objects it needs to transfer, i.e those
> given as arguments to the called function, and those put into
> multiprocessing queues/pipes. Global program data needn't be picklable -
> on Windows it gets wholly recreated by the child process, from python
> So if you're having pickle errors, it must be because the
> "object_from_module_xyz" itself is *not* picklable, maybe because it
> contains references to unpicklable objects. In such a case, properly
> implementing pickle magic methods inside the object should do it,
> shouldn't it ?
I'm also a long time lurker (and in financial software, coincidentally).
Pascal is correct here. We use a number of C++ libraries wrapped via
Boost.Python to do various calculations. The typical function calls return
wrapped C++ types. Boost.Python types are not, unfortunately, pickleable.
For a number of technical reasons, and also unfortunately, we often have to
load these libraries in their own process, but we want to hide this from
our users. We accomplish this by pickling the instance data, but importing
the types fresh when we unpickle, all implemented in the magic pickle
methods. We would lose any information that was dynamically added to
the type in the remote process, but we simply don't do that. We very often
have many unpickleable objects imported somewhere when we spin off our
processes using the multiprocessing library, and this does not cause any problems.
Jesse Noller <jnoller <at> gmail.com> writes:
> We already have an implementation that spawns a
> subprocess and then pushes the required state to the child. The
> fundamental need for things to be pickleable *all the time* kinda
> makes it annoying to work with.
This requirement puts a fairly large additional strain on working with
unwieldy, wrapped C++ libraries in a multiprocessing environment.
I'm not very knowledgeable on the internals of the system, but would
it be possible to have some kind of fallback system whereby if an object
fails to pickle we instead send information about how to import it? This
has all kinds of limitations - it only works for importable things (i.e. not
instances), it can potentially lose information dynamically added to the
object, etc., but I thought I would throw the idea out there so someone
knowledgeable can decide if it has any merit.
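The fallback idea could be sketched as a proxy whose pickle payload is an import path rather than the object's state (all names here are hypothetical, and the stated limitations apply: only module-level importables work, and dynamically added state is lost):

```python
import importlib
import pickle

def _import_object(module_name, qualname):
    # Re-import the object on the receiving side instead of
    # transporting its (possibly unpicklable) state.
    obj = importlib.import_module(module_name)
    for part in qualname.split('.'):
        obj = getattr(obj, part)
    return obj

class ImportableProxy:
    """Sketch: pickle instructions for re-importing an object rather
    than the object itself."""

    def __init__(self, obj):
        self._module = obj.__module__
        self._qualname = obj.__qualname__

    def __reduce__(self):
        # Unpickling calls _import_object(module, qualname).
        return (_import_object, (self._module, self._qualname))

# Usage: functions and classes round-trip by import, not by value.
import math
restored = pickle.loads(pickle.dumps(ImportableProxy(math.floor)))
print(restored is math.floor)  # True
```

A real fallback would presumably try normal pickling first and only fall back to this when it raises, which is where the "decide if it has any merit" question comes in.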
On Jan 31, 2010, at 01:06 PM, Ron Adam wrote:
>With a single cache directory, we could have an option to force writing
>bytecode to a desired location. That might be useful on its own for
>creating runtime bytecode only installations for installers.
One important reason for wanting to keep the bytecode cache files colocated
with the source files is that I want to be able to continue to manipulate
$PYTHONPATH to control how Python finds its modules. With a single
system-wide cache directory that won't be easy. E.g. $PYTHONPATH might be
hacked to find the source file you expect, but how would that interact with
how Python finds its cache files? I'm strongly in favor of keeping the cache
files as close to the source they were generated from as possible.