Here's a quick summary of the main things that are going to happen in
Distutils, and Distribute, and a few words on virtualenv and pip.
(there is much much more work going on, but I don't want to drown
people with details)
= Distutils =
Distutils is a package manager and competes with OS package managers.
This is a good thing because, unless you are developing a library or
an application that will only run on one specific system with its own
packaging system (like Debian), you will be able to reach many more
people. Of course, the goal is to avoid making the work of a Debian
packager (or that of any other OS with a package manager) too hard. In
other words, re-packaging a Distutils-based project should be easy and
Distutils should not get in their way (or as little as possible).
But right now Distutils is incomplete in many ways, and we are trying
to fix them.
== What's installed ? what's the installation format ? how to uninstall ? ==
First, it's an incomplete package manager: you can install a
distribution with it, but there's no way to list the installed
distributions. Worse, you can't uninstall a distribution.
PEP 376 resolves this, and once it's finished, the goal is to include
the APIs described there into Distutils itself and into the pkgutil
module in stdlib. Notice that there's an implementation at
http://bitbucket.org/tarek/pep376 that is kept up to date with PEP 376
so people can see what we are talking about.
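To make the problem concrete, here is a rough sketch (not PEP 376's
actual API or exact on-disk naming, which the PEP itself defines) of
the kind of query that becomes trivial once every installation shares
one metadata layout, i.e. one metadata directory per distribution
holding PKG-INFO and a RECORD file listing every installed file:

```python
import os

def get_distributions(site_packages):
    """Yield (name, metadata_dir) for every installed distribution.

    Illustrative sketch only: with a single standard format, listing
    what is installed is a directory scan, and uninstalling becomes
    "remove the files named in this distribution's RECORD file".
    """
    for entry in sorted(os.listdir(site_packages)):
        if entry.endswith('.egg-info'):
            name = entry[:-len('.egg-info')]
            yield name, os.path.join(site_packages, entry)
```

Today no such scan can work reliably, because eggs and plain Distutils
installs each put their metadata somewhere different.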
Another problem that has popped up over the last few years is that, in
the same site-packages, you can have different installation formats
depending on the tool that was used to install a distribution and on
whether that distribution uses Distutils or Setuptools.
End-users end up with zipped eggs (one file), unzipped eggs (a
self-contained directory) and regular Distutils installs (packages
and modules directly in site-packages). The metadata is also located
in different places depending on the installation format used.
That can't stand: there's no point in keeping various installation
formats in the *same* site-packages directory.
PEP 376 also resolves this by describing a *unique* format that works
for all. Once this is finished, Distutils will implement it by
changing the install command accordingly.
- Work left to do in PEP 376 : restrict its scope to a disk-based,
file-based site-packages.
- Goal: 2.7 / 3.2
== Dependencies ==
The other feature that makes a packaging system nice is dependencies:
a way for a distribution to list the distributions it requires to
run. As a matter of fact, PEP 314 introduced new Metadata fields for
this purpose ("Requires", "Provides" and "Obsoletes"). So you can
write things like "Requires: lxml >= 2.2.1", meaning that your
distribution requires lxml 2.2.1 or a newer version to run. But these
were just descriptive fields, and Distutils provided no feature based
on them.
In fact, no third-party tool provided a feature based on those fields
either. Setuptools provided "easy_install", a script that looks for
the dependencies and installs them by querying the Python Package
Index (PyPI). But this feature was implemented with its own metadata:
you can add an "install_requires" option in the setup() call in
setup.py, and at installation time it ends up in a "requires.txt" file
located alongside the Metadata for your distribution.
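For reference, the PEP 314 fields live in the PKG-INFO file shipped
with a distribution; a minimal, hand-written illustrative example:

```
Metadata-Version: 1.1
Name: example
Version: 1.0
Requires: lxml (>=2.2.1)
```

Setuptools' "install_requires", by contrast, is written only to its
own "requires.txt", which is one reason the two mechanisms need to be
reconciled.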
So the goal is to review PEP 314 and update the Metadata based on the
Setuptools feedback and community usage. Once that's done, Distutils
will implement the new metadata version and promote its usage.
Promoting its usage means that Distutils will provide APIs to work
with these fields, like a version comparison algorithm.
And while we're at it, we need to work out an inconsistency between
the "Author" and "Maintainer" fields. (The latter doesn't exist in the
Metadata but does exist on the setup.py side.)
- Work left to do in PEP 314 : finish PEP 386, finish the discussion
on the "maintainer" field.
- Goal: 2.7 / 3.2
== Version comparison ==
Once you provide dependency fields in the metadata, you need to
provide a version scheme: a way to compare two versions. Distutils has
two version comparison algorithms that are not used in its own code,
and that are used in only one place in the stdlib, from which they
could be removed painlessly. One version scheme is "strict" and one is
"loose". And Setuptools has yet another one, which is more heuristic:
it will take any version string and compare it, whether it's
well-formed or not.
PEP 386's goal is to describe a version scheme that can be used by
all, and if we can reach a consensus there, we can move on and
reference it in the PEP 314 update, alongside the dependency fields.
Then, in Distutils, we can deprecate the existing version comparison
algorithms, provide a new one based on PEP 386, and promote its usage.
One very important point: we will not force the community to use the
scheme described in PEP 386, but *there is* already a
de-facto convention on version schemes at PyPI if you use Pip or
easy_install, so let's have a documented standard for this,
and a reference implementation in Distutils.
There's an implementation at
http://bitbucket.org/tarek/distutilsversion that is kept up-to-date
with PEP 386.
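To show the kind of thing a documented scheme enables, here is a very
rough sketch of ordering version strings; this is *not* the PEP 386
algorithm (see the reference implementation above for that), just a
minimal illustration of "rational" versions where pre-releases sort
before the final release:

```python
import re

def version_key(v):
    """Turn '1.0a2' / '1.0' / '1.0.1' into a comparable tuple.

    Sketch only: numeric parts compare numerically, and a pre-release
    suffix ('a'/'b'/'c'/'rc' plus a number) sorts before the final
    release. Anything else is rejected rather than guessed at.
    """
    m = re.match(r'^(\d+(?:\.\d+)*)(?:(a|b|c|rc)(\d+))?$', v)
    if not m:
        raise ValueError('irrational version string: %r' % v)
    nums = tuple(int(p) for p in m.group(1).split('.'))
    if m.group(2):
        pre = (0, m.group(2), int(m.group(3)))  # pre-release sorts first
    else:
        pre = (1,)                              # final release
    return nums, pre
```

The "reject rather than guess" choice is the difference from the
Setuptools comparator, which accepts any string.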
- Work left to do in PEP 386 : another round with the community
- Goal: 2.7 / 3.2
== The fate of setup.py, and static metadata ==
Setup.py is a CLI to create distributions, install them, etc. You can
also use it to retrieve the metadata of a distribution. For example,
you can call "python setup.py --name" and the name will be displayed.
That's fine, and great for developers.
But there's a major flaw: it's Python code. That's a problem because,
depending on the complexity of this file, an OS packager who just
wants the metadata for the platform he's working on will have to run
arbitrary code that might do unwanted things (or might not even work).
So we are going to separate the metadata description from setup.py,
into a static configuration file that can be opened and read by anyone
without running any code. The only problem with this is that some
metadata fields may depend on the execution environment. For instance,
once "Requires" is re-defined and re-introduced via PEP 314, we will
have cases where "pywin32" is a dependency only on win32 systems.
So we've worked on that lately on Distutils-SIG and came up with a
micro-language, based on a ConfigParser file, that allows writing
metadata fields that depend on sys.platform etc. I won't detail the
syntax here, but the idea is that this file can be interpreted by
vanilla Python without running arbitrary code.
In other words: we will be able to get the metadata of a distribution
without having to install it or run any setup.py command.
One use case is the ability to list all dependencies a distribution
requires for a given platform, just by querying PyPI.
So I am adding this in Distutils for 2.7.
Of course setup.py stays, and this is backward compatible.
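Since the final syntax had not been published at the time of writing,
everything below is invented for illustration: the section name, the
"requires" field, and the one-comparison condition mini-language. The
point it demonstrates is the one above: conditional fields can be
evaluated with vanilla Python and no arbitrary code execution.

```python
import configparser  # "ConfigParser" on Python 2
import sys

# Hypothetical static metadata file content:
METADATA = """\
[metadata]
name = example
version = 1.0
requires =
    lxml (>=2.2.1)
    pywin32; sys.platform == 'win32'
"""

def marker_holds(marker, platform):
    # Interpret a restricted "sys.platform == '...'" condition without
    # executing arbitrary code: only this one comparison is recognised.
    left, _, right = marker.partition('==')
    if left.strip() != 'sys.platform':
        raise ValueError('unsupported condition: %r' % marker)
    return platform == right.strip().strip('\'"')

def requires_for(platform):
    cp = configparser.ConfigParser()
    cp.read_string(METADATA)
    deps = []
    for line in cp.get('metadata', 'requires').strip().splitlines():
        req, _, marker = line.partition(';')
        if not marker or marker_holds(marker.strip(), platform):
            deps.append(req.strip())
    return deps

print(requires_for(sys.platform))
```

This is exactly the "list all dependencies for a given platform"
use case: no setup.py is ever executed.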
- Work left to do : publish the final syntax, and do the implementation
- Goal: 2.7 / 3.2
== The fate of bdist_* commands ==
During the last Pycon summit we said that we would remove commands
like bdist_rpm, because Python's release cycle makes it unable to do a
good job there. Here's an example: from time to time I get cryptic
issues in the tracker from people on Fedora (or any other rpm-based
system), and it is extremely painful to fix these very specific
problems properly unless some RPM expert helps out. And by the time a
problem is detected and then fixed, it can be years before the fix is
available on their side. That's why, depending on the community,
commands like bdist_rpm are simply ignored, and OS packagers have
their own tools.
So the best way to handle this is to ask these communities to build
their own tool and to encourage them to use Distutils as a basis for
that.
This does not concern bdist_* commands for win32 because those are
very stable and don't change too much: Windows doesn't have a package
manager that would require these commands to evolve with it.
Anyway, when we said that we would remove bdist_rpm, it was very
controversial, because some people use it and love it.
So what is going to happen is the status quo: no bdist_* command will
be removed, but no new bdist_* command will be added. That's why I've
encouraged Andrew and Garry, who are working on a bdist_deb command,
to keep it in the "stdeb" project; eventually we will refer to it in
the Distutils documentation, if this bdist_deb complies with Distutils
standards. It doesn't right now, because it uses a custom version of
the Distribution class (through Setuptools) that no longer behaves
like Distutils' own.
For Distutils, I'll add some documentation explaining this, and a
section that will list community-driven commands.
- Work left to do : write the documentation
- Goal: 2.7 / 3.2
= Distribute =
I won't explain here again why we have forked; I think that's obvious
to anyone here by now. Instead, I'll explain what
we are planning in Distribute and how it will interact with Distutils.
Distribute has two branches:
- 0.6.x : provides a Setuptools-0.6c9 compatible version
- 0.7.x : will provide a refactoring
== 0.6.x ==
Not "much" is going to happen here: we want this branch to be helpful
to the community *today* by addressing the 40-or-so bugs
that were found in Setuptools and never fixed. This should happen
soon, because development is
fast: there are up to 5 committers working on it very often
(and the number grows weekly).
The biggest issue with this branch is that it provides the same
packages and modules Setuptools does, and this
requires some bootstrapping work where we make sure that, once
Distribute is installed, all distributions that require Setuptools
continue to work. This is done by faking the metadata of
Setuptools 0.6c9. That's the only way we found to do it.
There's one major thing though: thanks to the work of Lennart, Alex
and Martin, this branch supports Python 3,
which is great to have to speed up Py3 adoption.
The goal of the 0.6.x branch is to remove as many bugs as we can, and,
where possible, to drop the patches it applies to Distutils. We will
maintain 0.6.x for years and we will promote its usage everywhere
instead of Setuptools.
Some new commands are added there, when they are helpful and don't
interfere with the rest. I am thinking of "upload_docs", which lets
you upload documentation to PyPI. The goal is to move it into
Distutils at some point, if PyPI's documentation feature stays and
starts to be used.
== 0.7.x ==
We've started to refactor Distribute with the roadmap below in mind
(and no, as someone said, it's not vaporware; we've done a lot
already):
- 0.7.x can be installed and used with 0.6.x
- easy_install is going to be deprecated! Use Pip!
- the version system will be deprecated, in favor of the one in Distutils
- no more Distutils monkey-patch that happens once you use the code
(things like 'from distutils import cmd; cmd.Command = CustomCommand')
- no more custom site.py (that is: if something misses in Python's
site.py we'll add it there instead of patching it)
- no more namespaced-packages system, if PEP 382 (namespace package
support) makes it into 2.7
- The code is split into several packages and may be shipped as
several distributions:
  - distribute.resources: that's the old pkg_resources, but
    reorganized in clean, PEP-8 modules. This package will only
    contain the query APIs and will focus on being PEP 376
    compatible. We will promote its usage and see if Pip wants to use
    it as a basis. And maybe PyPM, once it's open source?
    (<hint> <hint>). It will probably shrink a lot though, once the
    stdlib provides PEP 376 support.
  - distribute.entrypoints: that's the old pkg_resources entry-points
    system, but on its own. It uses distribute.resources.
  - distribute.index: that's package_index and a few other things:
    everything required to interact with PyPI. We will promote its
    usage and see if Pip wants to use it as a basis.
  - distribute.core (might be renamed to main): that's everything
    else; it uses the other packages.
Goal: A first release before (or when) Python 2.7 / 3.2 is out.
= Virtualenv and the multiple version support in Distribute =
(I am not saying "We" here because this part was not discussed yet
with everyone)
Virtualenv allows you to create an isolated environment in which to
install distributions without polluting the main site-packages, a bit
like a per-user site-packages.
My opinion is that this tool exists only because Python doesn't
support installing multiple versions of the same distribution.
But if PEP 376 and PEP 386 support are added in Python, we're not far
from being able to provide multiple version support with
the help of importlib.
Setuptools provided multiple-version support, but I don't like its
implementation and the way it works.
I would like to create a new site-packages format that can contain
several versions of the same distribution, and:
- a special import system using importlib that would automatically
pick the latest version, thanks to PEP 376.
- an API to force at runtime a specific version (that would be located
at the beginning of all imports, like __future__)
- a layout that is compatible with the way OS packagers work with
Python packages
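The runtime-pin idea above can be sketched as follows; this is purely
hypothetical (the "VSP" layout, the per-version directory naming and
the require() API are all invented here, since only a prototype was
started):

```python
import sys

def require(name, version):
    """Pin one version of a distribution for this process.

    Hypothetical sketch: assumes a site-packages layout where each
    version lives in its own directory, e.g.
    'site-packages/lxml-2.2.1/'. A real implementation would use an
    importlib finder and PEP 376 metadata to pick the latest version
    automatically when no pin is given.
    """
    sys.path.insert(0, 'site-packages/%s-%s' % (name, version))

# Like __future__ imports, the pin would have to come before any
# other import of the distribution:
require('lxml', '2.2.1')
```

The point of the __future__ analogy is that the pin is declarative and
sits at the top of the module, before the import system resolves
anything.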
Goal: a prototype ASAP (one was started under the "VSP" name (virtual
site-packages) but is not finished yet)
Regards
Tarek
--
Tarek Ziadé | http://ziade.org | Open source is awesome! | Open source
endures through the ages because you take part
Hi
Trusted Computing (TC) is a technology developed and promoted by the
Trusted Computing Group (TCG) [3]. Basically, the group came up with
chips, called TPM chips, which are present on most motherboards
nowadays. Their main purpose is to enhance security, so that infected
executables don't run. They also provide memory curtaining, so that
cryptographic keys won't be accessible, and many other features. There
has been criticism of this from the FOSS community, because it enables
DRM; no wonder, as it is being pushed by Intel, Microsoft, AMD, etc.
But personally I think it's a good idea from a security point of view.
So, currently there is a TSS (TCG Software Stack) [1] API written in
C. And TrustedJava [2] is a project which ported it to Java and which
is going to be included in the standard Java API soon. They have two
versions: one is a simple wrapper on top of the C API, and the other
is a whole implementation of the stack in Java.
My proposal is that we create an API for it in Python.
*Reason*: I am a developer on Umit and I think Python is a very good
platform for developing applications. So, why not create an API which
helps in developing secure applications?
I would love to learn more and provide you with any further
information. Please let me know what you guys think of it.
Thanks in advance
Cheers
Abhiram
[1]
http://www.trustedcomputinggroup.org/resources/tcg_software_stack_tss_speci…
[2] http://trustedjava.sourceforge.net/index.php?item=jtss/about
[3] http://www.trustedcomputinggroup.org/
According to http://docs.python.org/reference/datamodel.html , the
reflected operands functions like __radd__ "are only called if the
left operand does not support the corresponding operation and the
operands are of different types. [3] For instance, to evaluate the
expression x - y, where y is an instance of a class that has an
__rsub__() method, y.__rsub__(x) is called if x.__sub__(y) returns
NotImplemented."
Consider the following simple example:
==========================
class Quantity(object):
    def __add__(self, other):
        return '__add__ called'
    def __radd__(self, other):
        return '__radd__ called'

class UnitQuantity(Quantity):
    def __add__(self, other):
        return '__add__ called'
    def __radd__(self, other):
        return '__radd__ called'

print 'Quantity()+Quantity()', Quantity()+Quantity()
print 'UnitQuantity()+UnitQuantity()', UnitQuantity()+UnitQuantity()
print 'UnitQuantity()+Quantity()', UnitQuantity()+Quantity()
print 'Quantity()+UnitQuantity()', Quantity()+UnitQuantity()
==========================
The output should indicate that __add__ was called in all four trials,
but the last trial calls __radd__. Interestingly, if I comment out the
definition of __radd__ in UnitQuantity, then the fourth trial calls
__add__ like it should.
I think this may be an important bug. I'm running Python 2.6.4rc1
(r264rc1:75270, Oct 13 2009, 17:02:06) on Ubuntu Karmic. Is it a known
issue, or am I misreading the documentation?
Thanks,
Darren
A little while ago, I posted here a suggestion about a new way to configure
logging, using dictionaries. This received some positive and no negative
feedback, so I have thought some more about the details of how it might work. I
present below the results of that thinking, in a PEP-style format. I don't know
if an actual PEP is required for a change of this type, but I felt that it's
still worth going through the exercise to try and achieve a reasonable level of
rigour. (I hope I've succeeded.)
I would welcome all your feedback on this proposal. If I hear no negative
feedback, I propose to implement this feature as suggested.
I thought about posting this on comp.lang.python as well, but possibly it's a
little too much information for most of the folks there. I think it would be
useful to get feedback from the wider community, though, and welcome any
suggestions on how best to achieve this.
Thanks and regards,
Vinay Sajip
-----------
PEP: XXX
Title: Dictionary-Based Configuration For Logging
Version: $Revision$
Last-Modified: $Date$
Author: Vinay Sajip <vinay_sajip at red-dove.com>
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 15-Oct-2009
Python-Version: 2.7 and 3.2
Post-History:
Abstract
========
This PEP describes a new way of configuring logging using a dictionary to hold
configuration information.
Rationale
=========
The present means for configuring Python's logging package is either by using
the logging API to configure logging programmatically, or else by means of
ConfigParser-based configuration files.
Programmatic configuration, while offering maximal control, fixes the
configuration in Python code. This does not facilitate changing it easily at
runtime, and, as a result, the ability to flexibly turn the verbosity of
logging up and down for different parts of a using application is lost. This
limits the usability of logging as an aid to diagnosing problems - and
sometimes, logging is the only diagnostic aid available in production
environments.
The ConfigParser-based configuration system is usable, but does not allow its
users to configure all aspects of the logging package. For example, Filters
cannot be configured using this system. Furthermore, the ConfigParser format
appears to engender dislike (sometimes strong dislike) in some quarters.
Though it was chosen because it was the only configuration format supported in
the Python standard at that time, many people regard it (or perhaps just the
particular schema chosen for logging's configuration) as 'crufty' or 'ugly',
in some cases apparently on purely aesthetic grounds.
Recent versions of Python include JSON support in the standard library, and
this is also usable as a configuration format. In other environments, such as
Google App Engine, YAML is used to configure applications, and usually the
configuration of logging would be considered an integral part of the
application configuration. Although the standard library does not contain
YAML support at present, support for both JSON and YAML can be provided in a
common way because both of these serialization formats allow deserialization
of Python dictionaries.
By providing a way to configure logging by passing the configuration in a
dictionary, logging will be easier to configure not only for users of JSON
and/or YAML, but also for users of bespoke configuration methods, by providing
a common format in which to describe the desired configuration.
Another drawback of the current ConfigParser-based configuration system is
that it does not support incremental configuration: a new configuration
completely replaces the existing configuration. Although full flexibility for
incremental configuration is difficult to provide in a multi-threaded
environment, the new configuration mechanism will allow the provision of
limited support for incremental configuration.
Specification
=============
The specification consists of two parts: the API and the format of the
dictionary used to convey configuration information (i.e. the schema to which
it must conform).
Naming
------
Historically, the logging package has not been PEP-8 conformant. At some
future time, this will be corrected by changing method and function names in
the package in order to conform with PEP-8. However, in the interests of
uniformity, the proposed additions to the API use a naming scheme which is
consistent with the present scheme used by logging.
API
---
The logging.config module will have the following additions:
* A class, called ``DictConfigurator``, whose constructor is passed the
dictionary used for configuration, and which has a ``configure()`` method.
* A callable, called ``dictConfigClass``, which will (by default) be set to
``DictConfigurator``. This is provided so that if desired,
``DictConfigurator`` can be replaced with a suitable user-defined
implementation.
* A function, called ``dictConfig()``, which takes a single argument - the
dictionary holding the configuration. This function will call
``dictConfigClass`` passing the specified dictionary, and then call the
``configure()`` method on the returned object to actually put the
configuration into effect::

    def dictConfig(config):
        dictConfigClass(config).configure()

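If the proposal lands as described, configuring logging from a
dictionary becomes a single call. The sketch below assumes the API is
adopted into ``logging.config`` as proposed; the handler and formatter
ids (``console``, ``brief``) are arbitrary:

```python
import logging
import logging.config

config = {
    'version': 1,
    'formatters': {
        'brief': {'format': '%(levelname)-8s: %(name)-15s: %(message)s'},
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'formatter': 'brief',
            'level': 'INFO',
        },
    },
    'root': {'level': 'INFO', 'handlers': ['console']},
}

# One call puts the whole configuration into effect.
logging.config.dictConfig(config)
logging.getLogger('demo').info('configured from a dict')
```

Because the input is a plain dict, the same call serves users who
deserialize their configuration from JSON, YAML, or any bespoke
format.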
Dictionary Schema - Overview
----------------------------
Before describing the schema in detail, it is worth saying a few words about
object connections, support for user-defined objects and access to external
objects.
Object connections
''''''''''''''''''
The schema is intended to describe a set of logging objects - loggers,
handlers, formatters, filters - which are connected to each other in an
object graph. Thus, the schema needs to represent connections between the
objects. For example, say that, once configured, a particular logger
has a particular handler attached to it. For the purposes of this
discussion,
we can say that the logger represents the source, and the handler the
destination, of a connection between the two. Of course in the configured
objects this is represented by the logger holding a reference to the
handler. In the configuration dict, this is done by giving each destination
object an id which identifies it unambiguously, and then using the id in the
source object's configuration to indicate that a connection exists between
the source and the destination object with that id.
So, for example, consider the following YAML snippet::

    handlers:
      h1: #This is an id
        # configuration of handler with id h1 goes here
      h2: #This is another id
        # configuration of handler with id h2 goes here
    loggers:
      foo.bar.baz:
        # other configuration for logger "foo.bar.baz"
        handlers: [h1, h2]

(Note: YAML will be used in this document as it is more readable than the
equivalent Python source form for the dictionary.)
The ids for loggers are the logger names which would be used
programmatically to obtain a reference to those loggers, e.g.
``foo.bar.baz``. The ids for other objects can be any string value (such as
``h1``, ``h2`` above) and they are transient, in that they are only
meaningful for processing the configuration dictionary and used to
determine connections between objects, and are not persisted anywhere when
the configuration call is complete.
The above snippet indicates that logger named ``foo.bar.baz`` should have
two handlers attached to it, which are described by the handler ids ``h1``
and ``h2``.
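For readers who prefer to see it, the YAML above corresponds to the
following Python dictionary (handler bodies left empty, as in the
snippet):

```python
config = {
    'handlers': {
        'h1': {},  # configuration of handler with id h1 goes here
        'h2': {},  # configuration of handler with id h2 goes here
    },
    'loggers': {
        'foo.bar.baz': {
            # other configuration for logger "foo.bar.baz"
            'handlers': ['h1', 'h2'],
        },
    },
}
```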
User-defined objects
''''''''''''''''''''
The schema should support user-defined objects for handlers, filters and
formatters. (Loggers do not need to have different types for different
instances, so there is no support - in the configuration - for user-defined
logger classes.)
Objects to be configured will typically be described by dictionaries which
detail their configuration. In some places, the logging system will be able
to infer from the context how an object is to be instantiated, but when a
user-defined object is to be instantiated, the system will not know how to do
this. In order to provide complete flexibility for user-defined object
instantiation, the user will need to provide a 'factory' - a callable which
is called with a configuration dictionary and which returns the instantiated
object. This will be signalled by the factory being made available under
the special key ``'()'``. Here's a concrete example::

    formatters:
      brief:
        format: '%(message)s'
      default:
        format: '%(asctime)s %(levelname)-8s %(name)-15s %(message)s'
        datefmt: '%Y-%m-%d %H:%M:%S'
      custom:
        (): my.package.customFormatterFactory
        bar: baz
        spam: 99.9
        answer: 42

The above YAML snippet defines three formatters. The first, with id
``brief``, is a standard ``logging.Formatter`` instance with the
specified format string. The second, with id ``default``, has a longer
format and also defines the time format explicitly, and will result in a
``logging.Formatter`` initialized with those two format strings. Shown in
Python source form, the ``brief`` and ``default`` formatters have the
configuration sub-dictionaries::

    {
        'format' : '%(message)s'
    }

and::

    {
        'format' : '%(asctime)s %(levelname)-8s %(name)-15s %(message)s',
        'datefmt' : '%Y-%m-%d %H:%M:%S'
    }

respectively, and as these dictionaries do not contain the special key
``'()'``, the instantiation is inferred from the context: as a result,
standard ``logging.Formatter`` instances are created. The configuration
sub-dictionary for the third formatter, with id ``custom``, is::

    {
        '()' : 'my.package.customFormatterFactory',
        'bar' : 'baz',
        'spam' : 99.9,
        'answer' : 42
    }

and this contains the special key ``'()'``, which means that user-defined
instantiation is wanted. In this case, the specified factory callable will be
located using normal import mechanisms and called with the *remaining* items
in the configuration sub-dictionary as keyword arguments. In the above
example, the formatter with id ``custom`` will be assumed to be returned by
the call::

    my.package.customFormatterFactory(bar="baz", spam=99.9, answer=42)

The key ``'()'`` has been used as the special key because it is not a valid
keyword parameter name, and so will not clash with the names of the keyword
arguments used in the call. The ``'()'`` also serves as a mnemonic that the
corresponding value is a callable.
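A runnable sketch of the factory mechanism, assuming the proposal is
adopted as described: since ``my.package.customFormatterFactory`` is
not a real module, the factory here is defined locally (the schema
also accepts a callable directly for ``'()'``), and its ``prefix``
keyword argument is invented for this example:

```python
import logging
import logging.config

def custom_formatter_factory(prefix):
    # Stand-in for my.package.customFormatterFactory: any callable
    # that returns a Formatter works; the remaining keys of the
    # configuring dict arrive as keyword arguments.
    return logging.Formatter(prefix + ' %(message)s')

config = {
    'version': 1,
    'formatters': {
        'custom': {
            '()': custom_formatter_factory,  # user-defined instantiation
            'prefix': 'APP',                 # passed through as a kwarg
        },
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'formatter': 'custom',
        },
    },
    'root': {'handlers': ['console']},
}
logging.config.dictConfig(config)
```

After this call, every record emitted through the console handler is
rendered by the formatter the factory returned.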
Access to external objects
''''''''''''''''''''''''''
There are times where a configuration will need to refer to objects external
to the configuration, for example ``sys.stderr``. If the configuration dict
is constructed using Python code then this is straightforward, but a problem
arises when the configuration is provided via a text file (e.g. JSON, YAML).
In a text file, there is no standard way to distinguish ``sys.stderr`` from
the literal string ``'sys.stderr'``. To facilitate this distinction, the
configuration system will look for certain special prefixes in string values
and treat them specially. For example, if the literal string
``'ext://sys.stderr'`` is provided as a value in the configuration, then the
``ext://`` will be stripped off and the remainder of the value processed using
normal import mechanisms.
The handling of such prefixes will be done in a way analogous to protocol
handling: there will be a generic mechanism to look for prefixes which match
the regular expression ``^(?P<prefix>[a-z]+)://(?P<suffix>.*)$`` whereby, if
the ``prefix`` is recognised, the ``suffix`` is processed in a prefix-
dependent manner and the result of the processing replaces the string value.
If the prefix is not recognised, then the string value will be left as-is.
The implementation will provide for a set of standard prefixes such as
``ext://`` but it will be possible to disable the mechanism completely or
provide additional or different prefixes for special handling.
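Using the regular expression given above, the prefix handling can be
sketched as follows; only ``ext://`` is implemented here, and the
dotted-name resolution (rightmost dot splits module from attribute) is
a simplifying assumption of this sketch:

```python
import importlib
import re
import sys

PREFIX_RE = re.compile(r'^(?P<prefix>[a-z]+)://(?P<suffix>.*)$')

def resolve(value):
    """Resolve 'ext://dotted.name' to the named external object.

    Values with no recognised prefix are returned unchanged, matching
    the behaviour described above.
    """
    m = PREFIX_RE.match(value)
    if m is None or m.group('prefix') != 'ext':
        return value
    module, _, attr = m.group('suffix').rpartition('.')
    return getattr(importlib.import_module(module), attr)
```

So ``resolve('ext://sys.stderr')`` yields the stream object itself,
while the plain string ``'sys.stderr'`` passes through untouched.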
Dictionary Schema - Detail
--------------------------
The dictionary passed to ``dictConfig()`` must contain the following keys:
* `version` - to be set to an integer value representing the schema
version. The only valid value at present is 1, but having this key allows
the schema to evolve while still preserving backwards compatibility.
All other keys are optional, but if present they will be interpreted as described
below. In all cases below where a 'configuring dict' is mentioned, it will be
checked for the special ``'()'`` key to see if a custom instantiation is
required. If so, the mechanism described above is used to instantiate;
otherwise, the context is used to determine how to instantiate.
* `formatters` - the corresponding value will be a dict in which each key is
a formatter id and each value is a dict describing how to configure the
corresponding Formatter instance.
The configuring dict is searched for keys ``format`` and ``datefmt`` (with
defaults of ``None``) and these are used to construct a
``logging.Formatter`` instance.
* `filters` - the corresponding value will be a dict in which each key is
a filter id and each value is a dict describing how to configure the
corresponding Filter instance.
The configuring dict is searched for key ``name`` (defaulting to the empty
string) and this is used to construct a ``logging.Filter`` instance.
* `handlers` - the corresponding value will be a dict in which each key is
a handler id and each value is a dict describing how to configure the
corresponding Handler instance.
The configuring dict is searched for the following keys:
* ``class`` (mandatory). This is the fully qualified name of the handler
class.
* ``level`` (optional). The level of the handler.
* ``formatter`` (optional). The id of the formatter for this handler.
* ``filters`` (optional). A list of ids of the filters for this handler.
All *other* keys are passed through as keyword arguments to the handler's
constructor. For example, given the snippet::

    handlers:
      console:
        class: logging.StreamHandler
        formatter: brief
        level: INFO
        filters: [allow_foo]
        stream: ext://sys.stdout
      file:
        class: logging.handlers.RotatingFileHandler
        formatter: precise
        filename: logconfig.log
        maxBytes: 1024
        backupCount: 3

the handler with id ``console`` is instantiated as a
``logging.StreamHandler``, using ``sys.stdout`` as the underlying stream.
The handler with id ``file`` is instantiated as a
``logging.handlers.RotatingFileHandler`` with the keyword arguments
``filename="logconfig.log", maxBytes=1024, backupCount=3``.
* `loggers` - the corresponding value will be a dict in which each key is
a logger name and each value is a dict describing how to configure the
corresponding Logger instance.
The configuring dict is searched for the following keys:
* ``level`` (optional). The level of the logger.
* ``propagate`` (optional). The propagation setting of the logger.
* ``filters`` (optional). A list of ids of the filters for this logger.
* ``handlers`` (optional). A list of ids of the handlers for this logger.
The specified loggers will be configured according to the level,
propagation, filters and handlers specified.
* `root` - this will be the configuration for the root logger. Processing of
the configuration will be as for any logger, except that the ``propagate``
setting will not be applicable.
* `incremental` - whether the configuration is to be interpreted as
incremental to the existing configuration. This value defaults to False,
which means that the specified configuration replaces the existing
configuration with the same semantics as used by the existing
``fileConfig()`` API.
If the specified value is True, the configuration is processed as described
in the section on "Incremental Configuration", below.
A Working Example
-----------------
The following is an actual working configuration in YAML format (except
that the email addresses are bogus)::

    formatters:
      brief:
        format: '%(levelname)-8s: %(name)-15s: %(message)s'
      precise:
        format: '%(asctime)s %(name)-15s %(levelname)-8s %(message)s'
    filters:
      allow_foo:
        name: foo
    handlers:
      console:
        class : logging.StreamHandler
        formatter: brief
        level   : INFO
        stream  : ext://sys.stdout
        filters: [allow_foo]
      file:
        class : logging.handlers.RotatingFileHandler
        formatter: precise
        filename: logconfig.log
        maxBytes: 1024
        backupCount: 3
      debugfile:
        class : logging.FileHandler
        formatter: precise
        filename: logconfig-detail.log
        mode: a
      email:
        class: logging.handlers.SMTPHandler
        mailhost: localhost
        fromaddr: my_app@domain.tld
        toaddrs:
          - support_team@domain.tld
          - dev_team@domain.tld
        subject: Houston, we have a problem.
    loggers:
      foo:
        level : ERROR
        handlers: [debugfile]
      spam:
        level : CRITICAL
        handlers: [debugfile]
        propagate: no
      bar.baz:
        level: WARNING
    root:
      level : DEBUG
      handlers : [console, file]
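The YAML above maps directly onto a Python dict.  Here is a minimal
sketch of part of it, fed to the ``dictConfig()`` API this PEP proposes
(it later shipped as ``logging.config.dictConfig`` in Python 2.7/3.2;
note that the final API also requires a ``version`` key):

```python
import logging
import logging.config

# A Python-dict rendering of a subset of the YAML working example.
config = {
    'version': 1,  # required by the API as eventually shipped
    'formatters': {
        'brief': {'format': '%(levelname)-8s: %(name)-15s: %(message)s'},
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'formatter': 'brief',
            'level': 'INFO',
            'stream': 'ext://sys.stdout',  # external object reference
        },
    },
    'loggers': {
        'foo': {'level': 'ERROR', 'handlers': ['console']},
        'spam': {'level': 'CRITICAL', 'propagate': False},
    },
    'root': {'level': 'DEBUG', 'handlers': ['console']},
}

logging.config.dictConfig(config)

# The named loggers now carry the configured levels and flags.
print(logging.getLogger('foo').getEffectiveLevel() == logging.ERROR)
print(logging.getLogger('spam').propagate)
```

In practice the YAML file would be loaded with a YAML parser and the
resulting dict passed to ``dictConfig()`` unchanged; the configuration
format itself is deliberately parser-agnostic.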
Incremental Configuration
=========================
It is difficult to provide complete flexibility for incremental
configuration.  Objects such as handlers, filters and formatters are
anonymous once a configuration is set up: the ids used in the
configuring dict are not retained, so there is no way to refer to those
objects when augmenting a configuration.  For example, if an initial
call configures logger ``foo`` with a handler given the id ``console``,
a subsequent call configuring a logger ``bar`` with handler id
``console`` would create a new handler instance, as the id ``console``
from the first call isn't kept.
Furthermore, there is not a compelling case for arbitrarily altering the
object graph of loggers, handlers, filters, formatters at run-time, once a
configuration is set up; the verbosity of loggers can be controlled just by
setting levels (and perhaps propagation flags).
Thus, when the ``incremental`` key of a configuration dict is present and
is ``True``, the system will ignore the ``formatters``, ``filters``,
``handlers`` entries completely, and process only the ``level`` and
``propagate`` settings in the ``loggers`` and ``root`` entries.
Configuration Errors
====================
If an error is encountered during configuration, the system will raise a
``ValueError`` or a ``TypeError`` with a suitably descriptive message. The
following is a (possibly incomplete) list of conditions which will raise an
error:
* A ``level`` which is not a string or which is a string not corresponding to
an actual logging level
* A ``propagate`` value which is not a Boolean
* An id which does not have a corresponding destination
* An invalid logger name
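The first of these conditions can be probed against the eventual
``logging.config.dictConfig`` implementation; a sketch:

```python
import logging.config

# A level string that names no actual logging level is rejected with
# a ValueError, as described above.
try:
    logging.config.dictConfig({
        'version': 1,
        'root': {'level': 'NO_SUCH_LEVEL'},
    })
except ValueError as exc:
    print('rejected:', exc)
```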
Copyright
=========
This document has been placed in the public domain.
..
Local Variables:
mode: indented-text
indent-tabs-mode: nil
sentence-end-double-space: t
fill-column: 70
coding: utf-8
End:
> > Do the users get any say in this?
>
> I'm a user! :-)
>
> I hate calling methods on string literals, I think it looks very odd
> to have code like this:
>
> "Displaying {0} of {1} revisions".format(x, y)
Ugh! Good point.
Is Python to be an easy-to-learn-and-remember language? I submit we are losing that one. To a user, this will be confusing. To a C programmer coming over to Python, especially so. Some of what makes %-formatting easy to remember is its parallel in C.
I'm conflicted. Philosophically I like the idea of mnemonic names over positional variables and allowing variable values determined elsewhere to be inserted in print strings. It is appealing.
Unless the benefit is at least 2x, a change should not be made, and I don't think this benefit rises to where it is worth the confusion and problems. ...and converting the legacy code base. And forget pretty; not that %-formatting is pretty either. Besides, according to the benchmarks, it is slower too. And it will take editors a while before the new syntax is supported and colorized, so there will be some errors for a while.
...and if one wants a "{" or a "}" in the printed output, one has to escape it? That is -2x over wanting a "%" in the output.
So until I see a *significant* benefit, my vote is *not* remove %-formatting. Make both available and if {} is to win, it will.
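For concreteness, the two styles under discussion side by side,
including the escaping cost mentioned above (a sketch; the example
strings are made up):

```python
x, y = 3, 10

# The same message in both styles produces identical output.
old = "Displaying %d of %d revisions" % (x, y)
new = "Displaying {0} of {1} revisions".format(x, y)
assert old == new

# A literal '%' costs one doubled character in the old style...
assert "100%% done" % () == "100% done"

# ...and a literal '{' or '}' costs the same doubling in the new style.
assert "set {{x}} to {0}".format(1) == "set {x} to 1"

print(old)
```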
Hello,
It turns out (*) that even in py3k, not all modules are PY_SSIZE_T_CLEAN.
Should we try to remedy that, and make PY_SSIZE_T_CLEAN the default in future
3.x versions?
As far as I know, there's no reason not to be PY_SSIZE_T_CLEAN, except
for having to convert old code.
(*) http://bugs.python.org/issue7080
Regards
Antoine.
PS : no, I'm not volunteering to do it all myself :)
I've written a small Tkinter script that crashes Python 3.1 (but not
Python 2.6) without any specific error message. When started from within
IDLE, the failure of the script also closes IDLE. (This is the case
under Windows, Mac OS X and Linux (tested with Ubuntu 9.04).)
Bug-Tracker Issue 6717
The script is attached to this issue
I think errors like this should definitely not occur. Instead, a message
like "recursion depth exceeded" should be displayed (and IDLE should
remain functional, if used).
Since I do not have the means available to track down this bug, I'd like
to draw your attention to it and ask whether someone else has the means
and time to do so.
I'd also suggest increasing the priority of this bug in the bug tracker.
Regards,
Gregor
The current shutdown code in pythonrun.c zaps module globals by
setting them to None (an attempt to break reference cycles). That
causes problems since __del__ methods can try to use the globals
after they have been set to None.
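The failure mode can be simulated in pure Python (a sketch only; the
real zapping happens in pythonrun.c at interpreter shutdown, and the
names ``Resource``/``log`` here are invented for illustration):

```python
failures = []

class Resource:
    def __del__(self):
        try:
            # Global lookup happens at finalization time.
            log("closing resource")
        except TypeError:
            # log has been set to None, just like a zapped module global.
            failures.append("global already torn down")

def log(msg):
    print(msg)

r = Resource()
log = None   # simulate shutdown setting module globals to None
del r        # __del__ runs now (CPython refcounting) and trips over log

print(failures)
```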
The procedure implemented by http://bugs.python.org/issue812369
seems to be a better idea. References to modules are replaced by
weak references and the GC is allowed to cleanup reference cycles.
I would like to commit this change but I'm not sure if it is a good
time in the development cycle or if anyone has objections to the
idea. Please speak up if you have input.
Neil
I notice that WITHOUT_COMPLEX still appears in Python.h and several .c files
but nowhere else in the 2.6, 2.7 or 3.1 source, most particularly not in
configure or pyconfig.h.in. Are builds --without-complex still supported?
Has it been tested at any time in the recent past?
Skip