The company where I work has done some work on Python, and the
question is how this work, owned by the company, can be contributed to
the community properly. Are there any license issues or other pitfalls
we need to think about? I imagine that other companies have
contributed before, so this is probably an already-solved problem.
2009/12/30 Martin (gzlist) <gzlist(a)googlemail.com>:
> Hi Benjamin,
> In rev 74094 of Python, you split the unittest module up, could you
> point me at any bug entries or discussion over this revision so I can
> catch up?
This was mostly a discussion on IRC between Michael Foord and myself.
> As a side-effect you seem to have changed the method of marking a
> module as not worth including in a traceback, so it is no longer
> usable by other modules. Previously, a global was set at the top of
> the module:
>
>     __unittest = 1
>
> which was then checked for when constructing traceback output:
>
>     def _is_relevant_tb_level(self, tb):
>         return '__unittest' in tb.tb_frame.f_globals
>
> The new code instead checks the module name:
>
>     def _is_relevant_tb_level(self, tb):
>         globs = tb.tb_frame.f_globals
>         is_relevant = '__name__' in globs and \
>                       globs["__name__"].startswith("unittest")
>         del globs
>         return is_relevant
>
> Only packages actually named "unittest" can be excluded.
> What is now the preferred method of marking a module as test-internal?
> Overriding the leading-underscore _is_relevant_tb_level method? How
> can this be done cooperatively by different packages?
When I made that change, I didn't know that the __unittest "hack" was
being used anywhere outside of unittest, so I felt fine replacing it
with another. While I still consider it an implementation detail, I
would be OK with exposing an "official" API for this.
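For context, the old convention can be sketched like this (a minimal,
self-contained illustration of the pre-r74094 mechanism; the helper
names below are made up for the demo, not part of unittest's API):

```python
import sys

# Pre-r74094 convention: a module opts out of unittest tracebacks by
# defining this module-level global (simulated here in the demo module
# itself).
__unittest = 1

def _is_relevant_tb_level(tb):
    # True means "this frame belongs to test machinery; hide it".
    return '__unittest' in tb.tb_frame.f_globals

def visible_frames(tb):
    # Hypothetical helper: collect names of frames that would survive
    # the filtering.
    names = []
    while tb is not None:
        if not _is_relevant_tb_level(tb):
            names.append(tb.tb_frame.f_code.co_name)
        tb = tb.tb_next
    return names

def failing():
    raise ValueError("boom")

try:
    failing()
except ValueError:
    names = visible_frames(sys.exc_info()[2])

# Every frame here lives in a module that defines __unittest, so all
# of them are filtered out.
print(names)
```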
> I would have CCed a mailing list with this question, but I don't like
> getting yelled at for posting on the wrong one; please feel free to do
> so with your reply if you feel it's appropriate (the CCing, not the
> yelling).
python-dev is perfect for this discussion.
I built Python 2.6.2 with the latest libffi-3.0.9 on AIX 5.3 using xlc.
When I try to run the ctypes test cases, two failures are seen:

    test_ints (ctypes.test.test_bitfields.C_Test) ... FAIL
    test_shorts (ctypes.test.test_bitfields.C_Test) ... FAIL

I have attached the full test case results.
If I change the types c_int and c_short to c_uint and c_ushort in
class BITS(Structure) in the file test_bitfields.py, then no failures
are seen.
Has anyone faced a similar issue? Any help is appreciated.
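For reference, a bitfield Structure in the spirit of the BITS class in
test_bitfields.py looks roughly like this (the field names and widths
here are illustrative, not the exact upstream definition):

```python
import ctypes

class BITS(ctypes.Structure):
    # Signed bitfields, as in the failing tests; swapping c_int/c_short
    # for c_uint/c_ushort is the workaround described above.
    _fields_ = [("A", ctypes.c_int, 1),
                ("B", ctypes.c_int, 4),
                ("M", ctypes.c_short, 1),
                ("N", ctypes.c_short, 4)]

b = BITS()
b.B = 7        # fits in a signed 4-bit field (-8..7)
b.N = 7
print(b.B, b.N)
```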
Is there a high volume of incoming bugs to the Python tracker?
If so, I'd like to help with triaging. I think I have all the
necessary access; what I'm missing is the knowledge of how to set
myself up to get notifications of new bugs.
How do I do that?
Simplistix - Content Management, Batch Processing & Python Consulting
I think we've reached a consensus on those two PEPs.
Although, there's one last point that was forgotten in the
discussions: I've introduced "rc" in the pre-release markers, so PEP
386 is compatible with Python's own version scheme. "rc" comes right
after "c" in the sorting. It's slightly redundant with the "c" marker,
but I don't think this really matters as long as consumers know how to
order them (a < b < c < rc). I have also stated that "c" is the
preferred marker for third-party projects, from PEP 386's point of
view.
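The ordering rule can be sketched as follows (an illustration of the
a < b < c < rc relation only, not the actual PEP 386 reference
implementation):

```python
# Map each pre-release marker to its rank under the proposed ordering:
# a < b < c < rc.
_MARKER_ORDER = {"a": 0, "b": 1, "c": 2, "rc": 3}

def marker_key(marker):
    return _MARKER_ORDER[marker]

markers = ["rc", "a", "c", "b"]
ordered = sorted(markers, key=marker_key)
print(ordered)  # → ['a', 'b', 'c', 'rc']
```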
Is there anything else I can do to get those two PEPs accepted?
Tarek Ziadé | http://ziade.org
How about a new Python 3 release with (possibly partial) backwards
compatibility with 2.6? I'm a big 3.x fan, but I'm dismayed at the way
major software hasn't been ported to it. I'm eager to use 3.x, but,
paradoxically, the 3.x release leaves me stuck with 2.6. Excuse me if
this has been suggested in the past.
So there wasn't really any more feedback on the last post of the
argparse PEP other than a typo fix and another +1.
Can I get a pronouncement? Here's a summary of the responses. (Please
correct me if I misinterpreted anyone.)
* Floris Bruynooghe +1
* Brett Cannon +1
* Nick Coghlan +1
* Michael Foord +1
* Yuval Greenfield +1
* Doug Hellmann +1
* Kevin Jacobs +1
* Paul Moore +1
* Jesse Noller +1
* Fernando Perez +1
* Jon Ribbens +1
* Vinay Sajip +1
* Barry Warsaw +1
* Antoine Pitrou -0
* Martin v. Löwis -0
* M.-A. Lemburg -1
Note that I've interpreted those who were opposed to the deprecation
of getopt as -0 since the PEP no longer proposes that, only the
deprecation of optparse. (People who opposed optparse's deprecation
are still -1.)
If there's any other information that would be helpful for a
pronouncement, please let me know.
Where did you get that preposterous hypothesis?
Did Steve tell you that?
--- The Hiphopopotamus
On behalf of the Distutils-SIG, I would like to propose the addition
of PEP 345 (once and *if* PEP 386 is accepted).
It's the metadata v1.2: http://www.python.org/dev/peps/pep-0345/
PEP 345 was initiated a while ago by Richard Jones, and has been
reworked since then together with PEP 386, at PyCon last year and on
the Distutils-SIG.
The major enhancements are:
- being able to express dependencies on other *distribution* names,
rather than package or module names. This enhancement comes from
Setuptools and has been used successfully by the community for years.
- being able to express fields whose values are specific to certain
platforms. For example, being able to define "pywin32" as a dependency
*only* on win32. This enhancement will allow any tool to query PyPI
and get the metadata for a particular execution context, without
having to download, build, or install the distribution.
- being able to provide a list of browsable URLs for the project, like
a bug tracker, a repository, etc., in addition to the home URL. This
will allow UIs like PyPI to display a list of URLs for a project. A
side-effect is that project maintainers will be able to direct their
end users to the right places when they need to find detailed
documentation or provide some feedback. This enhancement was driven by
the discussions about the rating/comment system at PyPI on
catalog-sig.
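As an illustration, a metadata fragment using these fields might look
like this (the project name and URLs are invented; the field spellings
follow PEP 345):

```
Metadata-Version: 1.2
Name: ExampleProject
Version: 1.0
Requires-Dist: pywin32 (>=1.0); sys.platform == 'win32'
Project-URL: Bug Tracker, http://bugs.example.com/
Project-URL: Repository, http://hg.example.com/ExampleProject/
```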
We believe that having PEP 386 and PEP 345 accepted will be a major
improvement for the Python packaging ecosystem. The next PEP in the
series we are working on is PEP 376.
As a side note, I would really like to see them (hopefully) accepted
before the first beta of Python 2.7, so we can add these features in
2.7/3.2 and start working on third-party tools (Distribute, Pip, a
standalone version of Distutils for 2.6/2.5, etc.) to be ready to
support them by the time 2.7 final is out.
Tarek Ziadé | http://ziade.org
When I ported gmpy (Python bindings for the GMP multiple-precision
library) to Python 3.x, I began to use PyLong_AsLongAndOverflow
frequently. I found the code to be slightly faster and cleaner than
using PyLong_AsLong and checking for overflow. I looked at making
PyLong_AsLongAndOverflow available to Python 2.x;
http://bugs.python.org/issue7528 includes a patch that adds
PyLong_AsLongAndOverflow to Python 2.7.
I also included a file (py3intcompat.c) that can be included with an
extension's source code to provide PyLong_AsLongAndOverflow to earlier
versions of Python 2.x. In the bug report, I suggested that
py3intcompat.c could be shipped in the Misc directory and made
available to extension authors. This follows the precedent of
pymemcompat.h. But there may be more "compatibility" files that could
benefit extension authors. Mark Dickinson suggested that I bring the
topic up on python-dev.
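The calling pattern can be demonstrated from Python itself through
ctypes.pythonapi (in real extension code this is of course a direct C
call; the snippet just shows the overflow semantics):

```python
import ctypes

# Bind the C-API function:
#     long PyLong_AsLongAndOverflow(PyObject *obj, int *overflow)
as_long = ctypes.pythonapi.PyLong_AsLongAndOverflow
as_long.restype = ctypes.c_long
as_long.argtypes = [ctypes.py_object, ctypes.POINTER(ctypes.c_int)]

overflow = ctypes.c_int(0)
small = as_long(123, ctypes.byref(overflow))
print(small, overflow.value)       # value converts; overflow stays 0

as_long(1 << 200, ctypes.byref(overflow))  # too big for a C long
big_overflow = overflow.value
print(big_overflow)                # set to 1 on positive overflow
```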
Several questions come to mind:
1) Is it reasonable to provide backward-compatibility files (either as
.h or .c) that give extension authors support for new API calls?
2) If yes, should they be included with the Python source or
distributed as a separate entity (the 2to3 and/or 3to2 projects, a
Wiki page, etc.)?
3) If not, and extension authors create their own compatibility files,
are there any specific attribution or copyright messages that must be
included? (I assume the compatibility code is made by extracting the
code for the new API and tweaking it to run on older versions of
Python.)
Thanks in advance for your attention,
Case Van Horsen