On Thu, 16 Dec 2010 07:15:02 +0100, eric.araujo <python-checkins(a)python.org> wrote:
> Modified: python/branches/release27-maint/Doc/library/compileall.rst
> --- python/branches/release27-maint/Doc/library/compileall.rst (original)
> +++ python/branches/release27-maint/Doc/library/compileall.rst Thu Dec 16 07:15:02 2010
> @@ -1,4 +1,3 @@
> :mod:`compileall` --- Byte-compile Python libraries
> @@ -50,14 +49,14 @@
> Expand list with its content (file and directory names).
I realize you didn't write this line, but note that '-' is accepted as
an argument and means "read the list from stdin".
> -.. versionadded:: 2.7
> - The ``-i`` option.
> +.. versionchanged:: 2.7
> + Added the ``-i`` option.
> Public functions
> -.. function:: compile_dir(dir[, maxlevels[, ddir[, force[, rx[, quiet]]]]])
> +.. function:: compile_dir(dir[, maxlevels[, ddir[, force[, rx[, quiet]]]]])
> Recursively descend the directory tree named by *dir*, compiling all :file:`.py`
> files along the way. The *maxlevels* parameter is used to limit the depth of
> @@ -72,6 +71,23 @@
> If *quiet* is true, nothing is printed to the standard output in normal
> +.. function:: compile_file(fullname[, ddir[, force[, rx[, quiet]]]])
> + Compile the file with path *fullname*. If *ddir* is given, it is used as the
> + base path from which the filename used in error messages will be generated.
> + If *force* is true, modules are re-compiled even if the timestamp is up to
> + date.
Although this is copied from the other descriptions of *ddir*, it and the
other instances (and the description of the -d option) should all really
be fixed. As I discovered when writing the tests for the -d option,
what ddir is is the path that is "baked in" to the .pyc file. In very
old versions of Python that meant it was the path that would show up in
tracebacks as the path to the source file. In modern Pythons the ddir
path shows up if and only if the .py file does not exist and the .pyc
file is being run directly. In 3.2, this means it will never show up
normally, since you can't even run the .pyc file without moving it out of
__pycache__. Which means 'ddir' is henceforth useful only to those people
who want to package sourceless distributions of the python code. (If you
want to see this in action check out the -d tests in test_compileall.)
So, 'in error messages' really means 'in tracebacks, if the .py file
does not exist and the .pyc file is being run directly'.
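The ddir behaviour described above can be sketched as follows ("<install-prefix>" is a placeholder, not a real path, and the module name is made up):

```python
import compileall
import os
import tempfile

# compile_file() bakes the ddir-based path into the .pyc; that path only
# surfaces in tracebacks when the .pyc is run without its .py source.
with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "mod.py")
    with open(src, "w") as f:
        f.write("x = 1\n")
    ok = compileall.compile_file(src, ddir="<install-prefix>", quiet=1)
    print(bool(ok))  # True on success
```

Running the resulting .pyc without the .py next to it (a "sourceless distribution") is the only situation where the baked-in path is visible.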
R. David Murray www.bitdance.com
Hello Core Developers,
My name is Dimitrios and I am a newbie in Python. I am working on a
project (part of my PhD) called the Synergeticprocessing module.
Initially it imitates the built-in multiprocessing module, but the
processes are distributed over a LAN rather than locally. The main issue I
have is with the pickle module, and I think I found some kind of BUG in the
built-in multiprocessing module.
(The Synergeticprocessing module is located at GitHub:
Starting with the 'BUG': anyone who uses the multiprocessing.Pool
of processes has to face the problem that types.MethodType is
impossible to pickle. That is, you cannot dispatch a class instance
method to the process Pool. However, digging into the source code
of the module, there are a few lines that would resolve this issue,
but they are either not activated or faultily activated, so they do not work.
This is the 'BUG', from multiprocessing/forking.py:

    # Try making some callable types picklable

    from pickle import Pickler

    class ForkingPickler(Pickler):
        dispatch = Pickler.dispatch.copy()

        @classmethod
        def register(cls, type, reduce):
            def dispatcher(self, obj):
                rv = reduce(obj)
                self.save_reduce(obj=obj, *rv)
            cls.dispatch[type] = dispatcher

    def _reduce_method(m):
        if m.im_self is None:
            return getattr, (m.im_class, m.im_func.func_name)
        else:
            return getattr, (m.im_self, m.im_func.func_name)
    ForkingPickler.register(type(ForkingPickler.save), _reduce_method)

    def _reduce_method_descriptor(m):
        return getattr, (m.__objclass__, m.__name__)
        # return getattr, (m.__self__, m.__name__)
The registration lines (marked red in my original message) are not doing
the job: for some reason they do not manage to register the _reduce_method
function (marked green) as a global reduce/pickling function, even if you
call the registration function in your __main__ script.
The solution I found is just to register the same reducer globally
through copy_reg:

    import copy_reg, types

    def _reduce_method(m):
        if m.im_self is None:
            return getattr, (m.im_class, m.im_func.func_name)
        return getattr, (m.im_self, m.im_func.func_name)

    copy_reg.pickle(types.MethodType, _reduce_method)
With that change everything works FINE, but ONLY for local methods, i.e.
ones whose class is defined in the __main__ script or imported into it.
In case you want to send something remotely (to another machine) or to
another __main__ script running separately, then you get a message like

    'module' object has no attribute '<my_class>'

The only way to resolve this is first to import a script that has
<my_class> defined in it, and then everything works fine.
So the problem seems to be that *m.im_class* (see the code above)
has its __module__ attribute defined as __module__ = '__main__' or
something like that, and this is the reason why the remote script cannot
execute the function. I mean that _reduce_method() above does pickle
the whole CLASS object, so there is no reason for it not to be executed
at the remote script. Besides, it does work, as mentioned above, if you
just import the user-defined class from another script.
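For reference, the workaround can be sketched in Python 3 terms (copyreg is the Python 3 spelling of copy_reg, and __self__/__func__ replace the im_self/im_func attributes used above; the Worker class is made up for illustration):

```python
import copyreg
import pickle
import types

# Register a global reducer so bound methods pickle as (getattr, (obj, name)).
def _reduce_method(m):
    if m.__self__ is None:
        return getattr, (m.__class__, m.__func__.__name__)
    return getattr, (m.__self__, m.__func__.__name__)

copyreg.pickle(types.MethodType, _reduce_method)

class Worker:
    def task(self, x):
        return x * 2

w = Worker()
# The method pickles as a reference (getattr, (w, 'task')); the instance w
# in turn pickles its class by reference, e.g. __main__.Worker.
restored = pickle.loads(pickle.dumps(w.task))
print(restored(21))  # -> 42
```

Note that because the class is pickled by reference, a class defined only in one process's __main__ will not resolve on a remote machine, which matches the failure described above.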
I have already spent about 12 weeks working on building my
synergeticPool and resolving the pickling issue; only 2 days were needed
for the code of the Pool, and the rest of the time was spent on the
pickling issues and studying all the class-related mechanics of Python.
That is the reason I started digging into the multiprocessing module,
found this, say, 'BUG', and finally sent this email.
I noticed that changes related to PEP 3147 and PEP 3149 in Doc haven’t
been accompanied by versionadded/versionchanged directives.
Is that on purpose, meaning that everyone should be aware of these PEPs
when reading 3.2 docs, or just an oversight?
On 16 December 2010 00:23, eric.araujo <python-checkins(a)python.org> wrote:
> Author: eric.araujo
> Date: Thu Dec 16 01:23:30 2010
> New Revision: 87296
> Advertise “python -m” instead of direct filename.
> Modified: python/branches/py3k/Doc/library/test.rst
> --- python/branches/py3k/Doc/library/test.rst (original)
> +++ python/branches/py3k/Doc/library/test.rst Thu Dec 16 01:23:30 2010
> @@ -168,14 +168,14 @@
> Running :mod:`test.regrtest` directly allows what resources are available
> tests to use to be set. You do this by using the ``-u`` command-line
> -option. Run :program:`python regrtest.py -uall` to turn on all
> +option. Run :program:`python -m regrtest -uall` to turn on all
Shouldn't this be `python -m test.regrtest`, or even just `python -m test`?
> resources; specifying ``all`` as an option for ``-u`` enables all
> possible resources. If all but one resource is desired (a more common
> case), a
> comma-separated list of resources that are not desired may be listed after
> -``all``. The command :program:`python regrtest.py -uall,-audio,-largefile`
> +``all``. The command :program:`python -m regrtest -uall,-audio,-largefile`
> will run :mod:`test.regrtest` with all resources except the ``audio`` and
> ``largefile`` resources. For a list of all resources and more command-line
> -options, run :program:`python regrtest.py -h`.
> +options, run :program:`python -m regrtest -h`.
> Some other ways to execute the regression tests depend on what platform
> tests are being executed on. On Unix, you can run :program:`make test` at
> Python-checkins mailing list
> Author: lukasz.langa
> New Revision: 87299
> Broken ConfigParser removed, SafeConfigParser renamed to ConfigParser.
> Life is beautiful once again.
IIUC, this change makes the bugs requesting use of SafeConfigParser in
distutils and logging obsolete.
> @@ -1139,6 +1122,6 @@
> if __name__ == "__main__":
> if "-c" in sys.argv:
> - test_coverage('/tmp/cmd.cover')
> + test_coverage('/tmp/configparser.cover')
Consider using the tempfile module. You need to print the filename on
stderr, I think. Alternatively, remove this custom functionality
entirely and move it to regrtest or unittest.
On Wed, 15 Dec 2010 01:53:37 +0100 (CET)
ezio.melotti <python-checkins(a)python.org> wrote:
> Author: ezio.melotti
> Date: Wed Dec 15 01:53:37 2010
> New Revision: 87253
> #363: add automatically release managers to the nosy list for release blockers. Initial patch by Georg Brandl.
You should probably add deferred blockers too.
I know it is late to add features in a beta release, but still I thought I
would ask for a little leeway for these issues, especially as they don't
change any API signatures.
Has patch, tests and docs
I have the patch ready and shall add the tests and docs too.
Nothing is dependent on those changes, just that it would be good to
Any suggestions on the above? Georg, is it okay if I push this in before
There's one last thing that needs to be done with configparser for 3.2.
Raymond, Fred, Michael and Georg already expressed their approval on that so
unless anybody finds a flaw in the idea expressed below, I'm going to make
the change for 3.2b2:
- the ConfigParser class will be removed
- the SafeConfigParser class will be renamed to ConfigParser
- 2to3 will rename SafeConfigParser classes to ConfigParser
- 2to3 will warn on the subtle behaviour change when ConfigParser classes
  are renamed
What's the difference?
Both ConfigParser and SafeConfigParser implement interpolation, e.g. option
values can contain special tokens similar to those implemented by Python's
string formatting: %(example)s. These tokens are replaced during get()
operations by values from respective keys (either from the current section
or from the DEFAULT section).
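The interpolation described above can be sketched as follows (Python 3 configparser spelling; the section and key names are made up):

```python
from configparser import ConfigParser

# %(home)s in a value is resolved at get() time, from the current
# section or the DEFAULT section.
parser = ConfigParser()
parser.read_string("""
[DEFAULT]
home = /home/user

[paths]
logs = %(home)s/logs
""")

print(parser.get("paths", "logs"))             # /home/user/logs
print(parser.get("paths", "logs", raw=True))   # %(home)s/logs
```

Passing raw=True skips interpolation and returns the stored value verbatim.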
SafeConfigParser was originally introduced to fix a couple of ConfigParser
problems:
- when a token didn't match the %(name)s syntax, it was simply treated as
a raw value. This caused configuration errors like %var or %(no_closing_s)
to be missed.
- if someone actually wanted to store arbitrary strings in values, including
Python formatting strings, there was no way to escape %(name)s in the
configuration. The programmer had to know in advance that some value may
hold %(name)s and only get() values from that option using
get('section', 'option', raw=True)
Then however, that option could not use interpolation anymore.
- set() originally allowed storing non-string values in the parser. This
was not meant to be a feature and caused trouble when the user tried to
save the configuration to a file or get the stored values back using typed
getters.
SafeConfigParser solves these problems by validating interpolation syntax
(only %(name)s or %% are allowed, the latter being an escaped percent sign)
and raising exceptions on syntax errors, and by validating type on set()
operations so that no non-string values can be passed to the parser using
set().
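The validation just described can be sketched with the Python 3 ConfigParser (which already has SafeConfigParser's behaviour; section and option names are made up):

```python
from configparser import ConfigParser

parser = ConfigParser()
parser.add_section("sec")

# %% is the escaped percent sign; it round-trips through get().
parser.set("sec", "ratio", "50%%")
print(parser.get("sec", "ratio"))  # 50%

# A bare % is rejected at set() time instead of being silently stored.
try:
    parser.set("sec", "bad", "50%")
except ValueError:
    print("invalid interpolation syntax rejected")

# Non-string values are rejected too.
try:
    parser.set("sec", "num", 42)
except TypeError:
    print("non-string value rejected")
```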
Why change that?
When ConfigParser was left alone, it remained the default choice for most
end users, including our own distutils and logging libs. This was a very
weak choice, and most current ConfigParser users are not aware of the
interpolation quirks. I had to close a couple of issues related to people
trying to store non-string values internally in the parser.
The current situation needlessly complicates the documentation. Explaining
all the above quirks to each new user who only wants to parse an INI file is
weak at best. Moreover, users trust Python to do the right thing by default
and according to their intuition. In this case, going for the default
configparser.ConfigParser class without consulting the documentation is
clearly a suboptimal choice.
One last argument is that SafeConfigParser is an awkward name. It implies
that the other parsers are somehow unsafe, or that this specific parser
protects users from something. This is generally considered a naming
antipattern.
You might ask whether this can be done for 3.2 (e.g. is that a feature or
a bugfix). In Raymond's words, the beta process should be used to flesh out
the APIs, test whether they work as expected and fix suboptimal decisions
before we hit the release candidate stage. He considers this essentially
a bugfix and I agree.
You might ask why do that now and not for 3.3. We believe that 3.2 is the
last possible moment of introducing a change like that. The adoption rate is
currently still low and application authors porting projects from 2.x expect
incompatibilities. When they are non-issues, handled by 2to3, there's
nothing to be afraid of.
But isn't that... INCOMPATIBLE?!
Yes, it is. Thanks to the low py3k adoption rate now's the only moment where
there's marginal risk of introducing silent incompatibility (there are
hardly any py3k projects out there). Projects ported from Python 2.x will be
informed by 2to3 of the change. We believe that this will fix more bugs than
it creates.
Support for bare % signs would be the single case where ConfigParser might
have appeared a more natural solution. In those cases we expect that users
will rather choose to turn off interpolation altogether.
If you have any strong, justified arguments against this bugfix, speak up.
Otherwise the change will be made on Thursday.
I'm not sure where to report this but the online doc appears to be
mismatched to python-2.6.5 for the logging module.
Specifically, for a dir of an instance of a LogRecord, I'm seeing:
['__doc__', '__init__', '__module__', '__str__', 'args', 'created',
'exc_info', 'exc_text', 'filename', 'funcName', 'getMessage',
'levelname', 'levelno', 'lineno', 'module', 'msecs', 'msg', 'name',
'pathname', 'process', 'processName', 'relativeCreated', 'thread',
'threadName']
while the documentation lists a different set of attributes, including "lvl".
This issue was brought to my notice today:
and reference was made in the comments to possible obstacles facing stdlib
maintainers who might wish to use logging in the stdlib and in its unit tests.
From my perspective and as mentioned in the logging documentation, library code
which uses logging should add a NullHandler instance to any top-level logger,
which will avoid any "No handlers could be found for logger XXX" message if no
logging handlers have been set up. This applies to stdlib code, too, though it
would be good if a logger namespace could be agreed for stdlib usage. (The
logging package itself uses the logger "py.warnings" to redirect warnings to
logging when configured to do so. Perhaps we could standardize on "py.XXX" for
stdlib loggers.)
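The library-side convention above can be sketched as follows (the logger name "py.spam" is a hypothetical stdlib-style namespace, not an agreed one):

```python
import logging

# Attach a NullHandler so that "No handlers could be found for logger ..."
# messages are suppressed when the application configures no logging.
logger = logging.getLogger("py.spam")
logger.addHandler(logging.NullHandler())

def fetch():
    logger.debug("fetching")  # silently dropped unless the app sets up handlers
    return 42
```

The library never adds any other handler by default; configuration is left entirely to the application.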
I would suggest that when unit testing, rather than adding StreamHandlers to log
to stderr, something like TestHandler and Matcher from this post be used:
This will allow assertion checking of logged messages without resorting to
StringIO, getvalue() etc. If people think it's a good idea, I can add the
TestHandler/Matcher classes to the unit test infrastructure (they wouldn't
become part of the public API, at least until 3.3, but I presume they could be
used in the stdlib unit tests).
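This is not the TestHandler/Matcher pair from the post (whose API is not reproduced here), but a minimal equivalent sketch of the idea: buffer records in memory so tests can assert on them without touching stderr or StringIO.

```python
import logging

class RecordingHandler(logging.Handler):
    """Collects emitted LogRecords for later assertions."""
    def __init__(self):
        super().__init__()
        self.records = []

    def emit(self, record):
        self.records.append(record)

logger = logging.getLogger("py.demo")  # hypothetical logger name
handler = RecordingHandler()
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.warning("disk %s is full", "sda1")
messages = [r.getMessage() for r in handler.records]
print(messages)  # ['disk sda1 is full']

# Per the recommendation above: remove and close the handler when done.
logger.removeHandler(handler)
handler.close()
```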
On the question of using logging in the stdlib and its unit tests, I would like
to throw this open to python-dev: is there anyone here who wants to use logging
but feels that they can't because of some shortcoming in logging? AFAIK there
should be no obstacles. The preferred approach in stdlib code is as I have
outlined above: add a NullHandler to top-level loggers to avoid
misconfiguration, document the logger names you use, and avoid adding any other
handlers by default. For unit testing, add a TestHandler (or equivalent) to your
top-level logger at the start of the test, make any assertions you need to
regarding what has been logged, remove the handler and close it at the end of
the test. I don't believe there is any inherent conflict or incompatibility
between logger usage in the stdlib, use of logging assertions in unit tests and
use of handlers by application code, but I am happy to have any mistake on my
part pointed out.
From what I've seen, concurrent.futures adds a StreamHandler which is removed in
the unit test and replaced by a StreamHandler pointing to a different stream.
This, I believe, should be changed in line with what I've said above.