The Python Software Foundation's Infrastructure committee has been charged
with finding a new tracker system for the Python development team to
replace SourceForge. The development team is currently unhappy with SF for
several reasons, including:
* Bad interface
The most obvious example is the "Check to Upload" button
* Lack of reliability
SF has been known to go down during the day unexpectedly and stay down
for hours
* Lack of workflow controls
For instance, you cannot delete a category once created
For these reasons and others, we are asking the Python community to help us
find a new tracker. We are requesting that test trackers be set up so that
the Infrastructure committee can evaluate the various candidates and decide
which one best meets our needs.
Because we are not sure exactly what our requirements for a tracker are, we
do not have a comprehensive requirements document. We do, however, have a
short list of bare minimum needs:
* Can import SF data
http://effbot.org/zone/sandbox-sourceforge.htm contains instructions on
how to access the data dump and work with the support tools (graciously
developed by Fredrik Lundh)
* Can export data
To prevent the need to develop our own tools to get our data out of the
next tracker, there must be a way to get a dump of the data (formatted or
raw) that includes *all* information
* Has an email interface
To facilitate participation in tracker item discussions, an email
interface is required to lower the barrier to add comments, files, etc.
If there is a tracker you wish to propose for Python development team use,
these are the steps you must follow:
* Install a test tracker
If you do not have the server resources needed, you may contact the
Infrastructure committee at infrastructure at python.org. Our resources are
limited in both machines and manpower, however, so *please* do what you can
to use your own servers. Note that if your tracker is chosen, we do not
expect you to host the final installation used by python-dev
* Import the SF data dump
http://effbot.org/zone/sandbox-sourceforge.htm
* Make the Infrastructure committee members administrators of the tracker
A list of the committee members can be found at
http://wiki.python.org/moin/PythonSoftwareFoundationCommittees#infrastructu…
* Add your tracker to the wiki page at
http://wiki.python.org/moin/CallForTrackers
This includes specifying the contact information for a *single* lead
person to contact for any questions about the tracker; this is to keep
communication simple and prevent us from having competing installations of
the same tracker software
* Email the Infrastructure committee that your test tracker is up and ready
to be viewed
We will accept new trackers for up to a maximum of two months starting
2006-06-05 (and thus ending 2006-08-07). If no new trackers are being
suggested, we will close acceptance one month after the last tracker was
proposed (which means the maximum timeframe for the whole process is three
months, ending 2006-09-04). This lets us avoid dragging the process out for
three months when there is no need to, thanks to people getting trackers up
quickly.
As the committee evaluates trackers, we will record what we like and dislike
on the http://wiki.python.org/moin/GoodTrackerFeatures wiki page so that the
various trackers can change their settings and notify us of such changes.
This prevents penalizing trackers that are set up quickly (which could be
taken as a sign of ease of maintenance) compared to trackers that are set up
later but are possibly more tailored to what the Infrastructure committee
discovers it wants from a tracker.
If you have any questions, feel free to email infrastructure at python.org .
- Brett Cannon
Chairman, Python Software Foundation Infrastructure committee
Hi there,
I'm pleased to announce pkipplib v0.04
This GPLed Python library allows you to create, manage, or parse IPP
(Internet Printing Protocol) requests.
In addition, it exposes a CUPS() class which allows one to interact with a
CUPS print server (or an IPP printer).
Written in pure Python, it has no need to link with the CUPS libraries,
and it doesn't even require any CUPS-related software to work.
The mid-to-long-term goal is to support all of the CUPS IPP API.
Summary of changes:
- Support for HTTP Basic authentication when connecting to a CUPS
server was added.
- General reliability was improved.
To learn more about it, see examples of use, or download it:
http://www.pykota.com/software/pkipplib/
Thank you for reading.
Jerome Alet
I am happy to announce the first beta of the M2Crypto 0.16 release.
Please give these bits a spin and report any problems. I will be making
new betas once a week (or more often if needed) until regressions are
fixed. I expect the final 0.16 bits will be out by the end of June 2006.
Highlights:
- All known memory leaks fixed
- All known regressions fixed
- Added --openssl option to setup.py which can be used to specify
where OpenSSL is installed, by Matt Rodriguez
- ECDSA signatures and ECDH key agreement, requires OpenSSL 0.9.8+,
by Arno Bakker
- Added sha224, sha256, sha384 and sha512, by Larry Bugbee
- Added serialNumber, SN, surname, GN and givenName fields to X509_Name,
by Martin Paljak
- And various other improvements and bugfixes, see CHANGES file
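As a quick illustration of what the new SHA-2 digests compute: the same
sha224/sha256/sha384/sha512 algorithms M2Crypto 0.16 now exposes are also
available through Python's standard hashlib module. The sketch below uses
hashlib only as a stdlib stand-in, not M2Crypto's own API:

```python
# Stdlib illustration of the SHA-2 family (sha224/sha256/sha384/sha512);
# M2Crypto 0.16 exposes the same digests via its own EVP layer.
import hashlib

data = b"hello world"
for name in ("sha224", "sha256", "sha384", "sha512"):
    # hashlib.new() looks the algorithm up by name
    digest = hashlib.new(name, data).hexdigest()
    print(name, digest)
```

Each algorithm produces a fixed-size digest (224, 256, 384, or 512 bits
respectively), so the hex strings printed above differ in length.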
Requirements:
* Python 2.3 or newer
* OpenSSL 0.9.7 or newer
o Some optional new features will require OpenSSL 0.9.8 or newer
* SWIG 1.3.24 or newer
Get it while it's hot from M2Crypto homepage:
http://wiki.osafoundation.org/bin/view/Projects/MeTooCrypto
--
Heikki Toivonen
Hello everyone,
I have the honour to announce the availability of lxml 1.0.
http://codespeak.net/lxml/
It's downloadable from cheeseshop:
http://cheeseshop.python.org/pypi/lxml
"""
lxml is a Pythonic binding for the libxml2 and libxslt libraries. It provides
safe and convenient access to these libraries using the ElementTree API. It
extends the ElementTree API significantly to offer support for XPath, RelaxNG,
XML Schema, XSLT, C14N and much, much more.
Its goals are:
* Pythonic API.
* Documented.
http://codespeak.net/lxml/#documentation
* FAST!
http://codespeak.net/lxml/performance.html
* Use Python unicode strings in API.
* Safe (no segfaults).
* No manual memory management!
(as opposed to the official libxml2 Python bindings)
"""
While the list of features added since the last beta version (1.0.beta) is
rather small, this version contains a large number of bug fixes found by
various users and testers. Thank you all for your help!
Stefan
Features added since 0.9.2:
* Element.getiterator() and the findall() methods support finding
arbitrary elements from a namespace (pattern {namespace}*)
* Another speedup in tree iteration code
* General speedup of Python Element object creation and deallocation
* Writing C14N no longer serializes in memory (reduced memory footprint)
* PyErrorLog for error logging through the Python logging module
* element.getroottree() returns an ElementTree for the root node of the
document that contains the element.
* ElementTree.getpath(element) returns a simple, absolute XPath expression
to find the element in the tree structure
* Error logs have a last_error attribute for convenience
* Comment texts can be changed through the API
* Formatted output via pretty_print keyword to serialization functions
* XSLT can block access to file system and network via XSLTAccessControl
* ElementTree.write() no longer serializes in memory (reduced memory
footprint)
* Speedup of Element.findall(tag) and Element.getiterator(tag)
* Support for writing the XML representation of Elements and ElementTrees
to Python unicode strings via etree.tounicode()
* Support for writing XSLT results to Python unicode strings via unicode()
* Parsing a unicode string no longer copies the string (reduced memory
footprint)
* Parsing file-like objects now reads chunks rather than the whole file
(reduced memory footprint)
* Parsing StringIO objects from the start avoids copying the string
(reduced memory footprint)
* Read-only 'docinfo' attribute in ElementTree class holds DOCTYPE
information, original encoding and XML version as seen by the parser
* etree module can be compiled without libxslt by commenting out the line
include "xslt.pxi" near the end of the etree.pyx source file
* Better error messages in parser exceptions
* Error reporting now also works in XSLT
* Support for custom document loaders (URI resolvers) in parsers and XSLT,
resolvers are registered at parser level
* Implementation of exslt:regexp for XSLT based on the Python 're' module,
enabled by default, can be switched off with 'regexp=False' keyword
argument
* Support for exslt extensions (libexslt) and libxslt extra functions
(node-set, document, write, output)
* Substantial speedup in XPath.evaluate()
* HTMLParser for parsing (broken) HTML
* XMLDTDID function parses XML into tuple (root node, ID dict) based on
xml:id implementation of libxml2 (as opposed to ET compatible XMLID)
Bugs fixed since 0.9.2:
* Memory leak in Element.__setitem__
* Memory leak in Element.attrib.items() and Element.attrib.values()
* Memory leak in XPath extension functions
* Memory leak in unicode related setup code
* Element now raises ValueError on empty tag names
* Namespace fixing after moving elements between documents could fail if
the source document was freed too early
* Setting namespace-less tag names on namespaced elements ('{ns}t' -> 't')
didn't reset the namespace
* Unknown constants from newer libxml2 versions could raise exceptions in
the error handlers
* lxml.etree compiles much faster
* On libxml2 <= 2.6.22, parsing strings with encoding declaration could
fail in certain cases
* Document reference in ElementTree objects was not updated when the root
element was moved to a different document
* Running absolute XPath expressions on an Element now evaluates against
the root tree
* Evaluating absolute XPath expressions (/*) on an ElementTree could fail
* Crashes when calling XSLT, RelaxNG, etc. with uninitialized ElementTree
objects
* Memory leak when using iconv encoders in tostring/write
* Deep copying Elements and ElementTrees maintains the document
information
* Serialization functions raise LookupError for unknown encodings
* Memory deallocation crash resulting from deep copying elements
* Some ElementTree methods could crash if the root node was not
initialized (neither file nor element passed to the constructor)
* Element/SubElement failed to set attribute namespaces from passed attrib
dictionary
* tostring() now adds an XML declaration for non-ASCII encodings
* tostring() failed to serialize encodings that contain 0-bytes
* ElementTree.xpath() and XPathDocumentEvaluator were not using the
ElementTree root node as reference point
* Calling document('') in XSLT failed to return the stylesheet
ReportLab are proud to announce not one but two major releases of our PDF
document generation framework.
The ReportLab PDF Toolkit lets you generate rich flowing documents in PDF
from dynamic data, complete with multiple columns, tables and charts, at
extremely high speeds; and to generate charts and data graphics in PDF and
bitmap formats. It was first released in mid-2000 and the previous stable
release, 1.20, was in late 2004.
The 2.0 release includes many new features, and works with Unicode or UTF8
input throughout. This simplifies many things but may break old code that
uses non-ASCII input. It should be trivial to upgrade your app using the
Python codecs package, which now includes codecs for most of the world's
languages.
http://www.reportlab.org/whatsnew_2_0.html
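The upgrade path mentioned above amounts to decoding legacy byte strings
into Unicode before handing them to the 2.0 toolkit. A rough sketch using
the stdlib codecs module (the "latin-1" source encoding here is only an
example, not something ReportLab requires):

```python
# Hypothetical upgrade sketch: ReportLab 2.0 works with Unicode or UTF8
# input, so byte strings in a legacy national encoding should be decoded
# first.  "latin-1" is an example source encoding.
import codecs

legacy_bytes = b"caf\xe9"                 # 'café' encoded as Latin-1
text = codecs.decode(legacy_bytes, "latin-1")  # now a Unicode string
utf8_bytes = text.encode("utf-8")         # UTF-8 form, also accepted
```

The codecs package ships with encodings for most of the world's languages,
so the same two-line conversion covers most legacy inputs.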
We have also produced a 1.21 release with a number of minor enhancements
and bug fixes since 1.20, and with the old character handling behaviour.
This should provide a safe upgrade for all existing users.
http://www.reportlab.org/whatsnew_1_21.html
ReportLab's commercial products (Report Markup Language, Diagra and
PageCatcher) also have their own 2.0 and 1.21 releases and are documented
on http://developer.reportlab.com/index.html. Open source users are
encouraged to review the RML examples and test cases, which provide very
clear examples of what's possible with the underlying Python objects.
Best regards,
John