I'm trying to modify doctest's DocTestParser so it will parse docstring code snippets out of a *.py file. (Although doctest can extract these with another method from a *.pyc, that method misses certain decorated functions; we would also rather insist on explicit imports of the needed modules, whereas that method automatically loads everything from the module containing the code.)
I need to find code snippets which are located in docstrings. Docstrings, being string literals, should be parseable with tokenize. But tokenize gives the wrong results (or I am doing something wrong) for this (pathological) case:
    """
    A quoted triple quote is not a closing
    of this docstring:
    >>> print '"""'
    """ # <-- this is the closing quote
Here is how I tokenize the file:

    import re, tokenize

    DOCSTRING_START_RE = re.compile(r'\s+[ru]*("""|' + "''')")

    o = open('example.py')
    for ti in tokenize.generate_tokens(o.readline):
        typ = ti[0]
        text = ti[-1]
        if typ == tokenize.STRING:
            print 'DOCSTRING:', repr(text)
This prints two STRING tokens:

    DOCSTRING: ' """\n A quoted triple quote is not a closing\n of this docstring:\n >>> print \'"""\'\n'
    DOCSTRING: ' """\n """ # <-- this is the closing quote\n'
There should be only one string tokenized, I believe. The PythonWin editor parses (and colorizes) this correctly, but tokenize (or I) is making an error.
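For what it's worth, one alternative that sidesteps tokenize entirely is to parse the source with the ast module and ask for docstrings directly. A minimal sketch (the example source here is made up, and this of course only works on source the compiler accepts):

```python
import ast

SOURCE = '''\
def f():
    """A docstring with a snippet.

    >>> 1 + 1
    2
    """
'''

tree = ast.parse(SOURCE)
for node in ast.walk(tree):
    # Only these node types can carry a docstring.
    if isinstance(node, (ast.Module, ast.FunctionDef, ast.ClassDef)):
        doc = ast.get_docstring(node)
        if doc:
            print(doc)
```

This returns the cooked string value rather than the raw token text, which may or may not matter depending on how the snippets are fed back to doctest.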
Thanks for any help,
> -----Original Message-----
> From: python-ideas-bounces+kristjan=ccpgames.com(a)python.org
> [mailto:email@example.com] On
> Behalf Of Sturla Molden
> Sent: 20. október 2009 22:13
> To: python-ideas(a)python.org
> Subject: Re: [Python-ideas] Remove GIL with CAS instructions?
> - The GIL has consequences on multicore CPUs that are overlooked:
> switches are usually missed at check intervals. This could be fixed
> without removing the GIL: For example, there could be a wait-queue for
> the GIL; a thread that request the GIL puts itself in the back.
This depends entirely on the platform and primitives used to implement the GIL.
I'm interested in Windows. There, I found this article:
So, you may be on to something. Perhaps a simple C test is in order then?
I did that. I found, on my dual-core Vista machine, running a "release" build, that both Mutexes and CriticalSections behaved as you describe, with no "fairness". Using a Semaphore, however, seems to retain fairness.
"fairness" was retained in debug builds too, strangely enough.
Now, Python uses none of these. On Windows, it uses an "Event" object coupled with an atomically updated counter. This also behaves fairly.
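The wait-queue idea quoted above can be sketched as a toy model in pure Python (illustrative only; the real GIL and my test application are C code, and the class name here is made up):

```python
import threading
from collections import deque

class FairLock:
    """A FIFO ('fair') lock: waiting threads acquire in arrival order."""

    def __init__(self):
        self._mutex = threading.Lock()   # protects the internal state
        self._waiters = deque()          # queue of events, oldest first
        self._held = False

    def acquire(self):
        with self._mutex:
            if not self._held:
                self._held = True
                return
            event = threading.Event()
            self._waiters.append(event)  # put ourselves at the back
        event.wait()                     # woken when it is our turn

    def release(self):
        with self._mutex:
            if self._waiters:
                # Hand the lock directly to the oldest waiter; _held
                # stays True because ownership is transferred, not freed.
                self._waiters.popleft().set()
            else:
                self._held = False
```

The direct hand-off in release() is what prevents a thread that just released the lock from immediately re-acquiring it ahead of older waiters.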
The test application is attached.
I think that you ought to substantiate your claims better, maybe for a specific platform and using a test like the one above.
On the other hand, it shows that we must be careful about what we use. There has been some talk of using CriticalSections for the GIL on Windows; this test ought to show the danger of that. The GIL is different from a regular lock. It is really a reverse lock, and therefore may need to be implemented in its own special way if we want very fast mutexes for the rest of the system. (cc to python-dev)
I'd like to get a second opinion on bug 7183:
The Boost folks have reported this as a regression in 2.6.3, making it
a candidate for Python 2.6.4. IIUC, the latest version of Boost fixes
the problem in their code, but if it really is a regression it could
affect other projects and should probably be fixed.
Robert Collins from the Bazaar team pinged me on IRC and originally
thought that they'd been bitten by this problem too, though he looked
around and determined it probably did /not/ affect them after all. So
we have only anecdotal evidence of a problem, and no reproducible test
case.
If the Python 2.6.4 release is to be held up for this issue, we need
to know today or tomorrow. Come the weekend, I'm going ahead with the
tag and release. Holding up the release will mean another release
candidate, and a wait of another week for the final.
So does anybody else think bug 7183 should be a release blocker for
2.6.4 final, or is it even a legitimate bug that we need to fix?
I'd like to turn over the organization of the VM and Python Language
Summits at PyCon 2010 to someone else, one or two people. (The same
person doesn't need to organize both of them.)
Why: in November PyCon will be three months away, so the guest list
needs to be finalized and the invitations need to be sent. Yet I
can't pull together the motivation to work on them; I contemplate the
task for two minutes and then go do something else. It's obviously
better if the summit tasks are being actively worked on instead of
just drifting, so I want to give it up now.
What's required: chiefly it's just a matter of sending and replying to
e-mail. Draw up a guest list (I can provide last year's lists); think
of new people & projects to be added, or e-mail someone to ask for
suggestions; send out invitations and requests for agenda items;
collect the responses so we know how many people are coming.
You can also help moderate the summits on the day of the events, but
if that's not feasible someone else could do it, or the groups could
moderate themselves.
(Also sent to pycon-organizers, psf-members.)
Issue #7033 proposes a new C API that creates a new exception class with
a docstring. Since exception classes should be documented, and adding
the docstring after creation is not a one-liner, I would say it is a useful
addition. What do you all think?
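For comparison, here is a hypothetical pure-Python helper doing what the proposed C function would do in one call (the name new_exception_with_doc is illustrative only, not the actual API from the patch):

```python
def new_exception_with_doc(name, doc, base=Exception):
    """Create an exception class with its docstring set at creation time."""
    return type(name, (base,), {'__doc__': doc})

# Usage:
MyError = new_exception_with_doc('MyError', 'Raised when something goes wrong.')
```

At the C level the equivalent today requires creating the class and then setting its __doc__ attribute in separate steps, which is the inconvenience the issue addresses.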
Thus spake the Lord: Thou shalt indent with four spaces. No more, no less.
Four shall be the number of spaces thou shalt indent, and the number of thy
indenting shall be four. Eight shalt thou not indent, nor either indent thou
two, excepting that thou then proceed to four. Tabs are right out.
As some of you know, I have recently released an arbitrary-precision
C library for decimal arithmetic together with a Python module:
Both the library and the module have been tested extensively. Fastdec
currently differs from decimal.py in a couple of ways that could be
fixed. The license is AGPL, but if there is interest in integrating it
into Python I'd release it under a Python-compatible license.
There have been several approaches towards getting C decimal arithmetic into Python.
Fastdec follows Raymond Hettinger's suggestion to provide wrappers for
an independent C implementation. Arguments in favour of fastdec are:
* Complete implementation of Mike Cowlishaw's specification
* C library can be tested independently
* Redundant arithmetic module for tests against decimal.py
* Faster than Java BigDecimal
* Compares relatively well in speed against gmpy
To be clear, I would not want to _replace_ decimal.py. Rather I'd like to
see a cdecimal module alongside decimal.py.
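For context, an API-compatible cdecimal would have to mirror decimal.py's context-driven interface, e.g. (plain decimal.py shown here):

```python
from decimal import Decimal, getcontext

getcontext().prec = 28          # 28 significant digits (the default context)
x = Decimal(1) / Decimal(7)
print(x)                        # 0.1428571428571428571428571429
```

The win from a C implementation is that exactly this kind of code runs unchanged, only much faster.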
I know that ultimately there should be a PEP for module inclusions. The
purpose of this post is to gauge interest. Specifically:
1. Are you generally in favour of a C decimal module in Python?
2. Would fastdec - after achieving full decimal.py compatibility - be
a serious candidate?
3. Could I use this list to settle a couple of questions, or would perhaps
a Python developer be willing to work with me to make it compatible? I'm
asking this to avoid doing work that would not find acceptance afterwards.
In Objects/longobject.c, there's the SIGCHECK() macro which periodically checks
for signals when doing long integer computations (divisions, multiplications).
It does so by messing with the _Py_Ticker variable.
It was added in 1991 under the title "Many small changes", and I suppose it was
useful back then.
However, nowadays long objects are ridiculously fast, witness for example:
$ ./py3k/python -m timeit -s "a=eval('3'*10000+'5');b=eval('8'*6000+'7')"
1000 loops, best of 3: 1.47 msec per loop
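The statement being timed is not shown above; assuming it was the multiplication a * b, a rough script-level re-run could look like this (timings obviously machine-dependent):

```python
import timeit

a = int('3' * 10000 + '5')
b = int('8' * 6000 + '7')

# Time 100 multiplications of a 10001-digit by a 6001-digit integer.
elapsed = timeit.timeit('a * b', globals={'a': a, 'b': b}, number=100)
print('%.3f msec per loop' % (elapsed / 100 * 1000))
```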
Can we remove this check, or are there people doing million-digit calculations
that they want to interrupt using Control-C?