The ``compiler.ast`` module makes parsing Python source code and AST
manipulation relatively painless, and it's straightforward to implement
a transformer class.
However, I find that the ``compiler.pycodegen`` module imposes a hard
limit on the length of functions, since the way it calculates jump
points caps how much bytecode a single function can hold.
I'm using this module to compile an XML dynamic template into Python
code (akin to "Mako") and functions may grow to a rather large size.
Now it seems that the ``parser`` module, which parses source code into
``parser.st`` trees, does not have the same limitation; however, I could
not find a transformer class compatible with its tree structure.
What's the recommended way of working with the AST tree from the
``parser`` module?
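For what it's worth, in current Python the old ``compiler`` package is gone and the ``ast`` module is the usual route for this kind of transformation. A minimal sketch of a transformer there (the class name and the constant-doubling rule are illustrative, not from the original code):

```python
import ast

# Sketch: an ast.NodeTransformer that rewrites every integer constant
# by doubling it, then compiles and runs the transformed tree.
class DoubleConstants(ast.NodeTransformer):
    def visit_Constant(self, node):
        if isinstance(node.value, int):
            # Replace the node, copying source locations from the original.
            return ast.copy_location(ast.Constant(value=node.value * 2), node)
        return node

tree = ast.parse("x = 20 + 1")
tree = DoubleConstants().visit(tree)
ast.fix_missing_locations(tree)  # new nodes need line/column info

ns = {}
exec(compile(tree, "<generated>", "exec"), ns)
print(ns["x"])  # 42  (20*2 + 1*2)
```

Unlike the old ``compiler.pycodegen`` path, this hands code generation back to the built-in ``compile()``, so it avoids reimplementing bytecode emission entirely.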
Over in PyConLand, there has been talk about trying to set up a
language summit the day before PyCon starts (the second day of
tutorials). The idea is to give the core developers and Python VM
implementers a day to sit around and talk about stuff without having
to eat into the sprints (I am not leading the organizing of it, so I
don't have any exact details beyond various ideas that have leaked
over to the program committee mailing list).
The idea was then floated of inviting the VM implementers of other
dynamic languages to a separate summit on Wednesday (the first day of
tutorials), where the various dynamic language VM implementers could get
together and talk.
PyCon would essentially act as the hosting site for this, and as
motivation to maybe get some other VM folks to look at the language,
since no business or university would necessarily be interested
enough to make this happen.
But there is a worry that if people attended one or both of the
summits it would cut into people's sprint time. And if any sprint
group would be adversely affected by this, the core sprint would be
hit the hardest since the attendees of the sprint are the most likely
to attend either summit.
And so I have been tasked with asking people whether attending the
summits would put a crimp in their attendance of the sprints. Please
let me know if attending the summits (especially the VM one on
Wednesday) would cause you to skip out on the sprints in any way.
[switching to python-dev]
Georg Brandl wrote:
> Martin v. Löwis schrieb:
>> Raymond Hettinger wrote:
>>>> Merges should be handled by the original committer.
>>> Respectfully disagree. Some people here have been taking
>>> responsibility for keeping the branches in sync.
>> Which people specifically?
> Specifically, Christian, Benjamin and myself have done larger merges
> to the 3k branch in the past, and if svnmerge is used, I suspect will
> do the same for 2.6.
That's different, though. Have any of you actually taken
*responsibility* to do so, either unlimited, or with some limitation?
(e.g. for a year, or until 2.7 is released, or for all changes
but bsddb and Windows).
I would be (somewhat) happy to hear that, but I wouldn't really expect
it - we are all volunteers, and we typically consider taking
responsibility (e.g. as a release manager) very carefully.
Please don't get me wrong: I very much appreciate that you volunteer,
but I don't want to release any committer from the duty of merging on
the assumption that someone has formally taken responsibility.
I would be skeptical about relying on such a commitment, knowing that RL
can get in the way too easily. E.g. Christian disappeared for some
time, and I completely sympathize with that - but it also tells
me that I can't count on somebody doing something unless that someone
has explicitly said that he will do that, hoping that he will tell
me when the commitment is no longer valid (the same happened, e.g.,
in the Python job board, and happens all the time in other projects -
it took me about a year before I stepped back as the PyXML maintainer).
I can *also* sympathize with committers that say "we don't want to
backport, because we either don't have the time, or the expertise
(say, to install and run svnmerge on Windows)". I just accept that
not all patches that deserve backporting actually do get backported
(just as not all patches that deserve acceptance do get accepted,
in the first place).
So now that we've released 2.6 and are working hard on shepherding 3.0
out the door, it's time to worry about the next set of releases. :)
I propose that we dramatically shorten our release cycle for 2.7/3.1
to roughly a year and put a strong focus on stabilizing all the new
goodies we included in the last release(s). In the 3.x branch, we
should continue to solidify the new code and features that were
introduced. One of 2.7's main objectives should be bridging the 2.x
and 3.x lines.
"There's nothing quite as beautiful as an oboe... except a chicken
stuck in a vacuum cleaner."
I'm a novice Python programmer. I have made two changes to robotparser.py. I
apologize if this is the wrong list to post this mail to.
1. Some sites (especially Wikipedia) return 403 when the default User-Agent
is used, so I have changed the code to use urllib2 and added a
set_user_agent method. This is simple.
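As a sketch of point 1 using the modern urllib.request module (the class name, method name, and agent strings here are mine for illustration, not taken from the actual patch):

```python
import urllib.request

DEFAULT_AGENT = "MyRobotParser/1.0"  # hypothetical default agent string

class AgentFetcher:
    """Sketch: let callers override the User-Agent used for robots.txt."""

    def __init__(self):
        self.user_agent = DEFAULT_AGENT

    def set_user_agent(self, agent):
        self.user_agent = agent

    def build_request(self, url):
        # Sending an explicit User-Agent header avoids the 403 that some
        # sites return for the default urllib agent string.
        return urllib.request.Request(
            url, headers={"User-Agent": self.user_agent})

f = AgentFetcher()
f.set_user_agent("ExampleBot/2.0")
req = f.build_request("http://example.com/robots.txt")
print(req.get_header("User-agent"))  # ExampleBot/2.0
```

The request is only built here, not sent; a real fetcher would pass it to ``urllib.request.urlopen``.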
2. This problem is slightly more complicated. Please check the robots.txt
file from that site: it contains two User-agent: * lines.
These name tokens are used in User-agent lines in /robots.txt to
identify to which specific robots the record applies. The robot
must obey the first record in /robots.txt that contains a User-
Agent line whose value contains the name token of the robot as a
substring. The name comparisons are case-insensitive. If no such
record exists, it should obey the first record with a User-agent
line with a "*" value, if present. If no record satisfied either
condition, or no records are present at all, access is unlimited.
But it seems that our robotparser is obeying the 2nd one. The problem
occurs because robotparser assumes that no robots.txt will contain two
``User-agent: *`` records. A file should not have two such lines, but in
reality many sites do.
So I have changed the code as follows:
    def _add_entry(self, entry):
        if "*" in entry.useragents:
            # the default entry is considered last
            if self.default_entry is None:
                # keep only the first "*" record, per the spec
                self.default_entry = entry
        else:
            self.entries.append(entry)
And at the end of the parse(self, lines) method I added a few lines as
well (they were marked in red in my original message).
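For what it's worth, today's ``urllib.robotparser`` keeps a separate default entry that is only consulted after the named records, which gives the first-record-wins behavior the spec asks for. A small check (the robots.txt content and bot name below are made up):

```python
import urllib.robotparser

# Two "User-agent: *" records; per the spec, the FIRST matching record
# must win, so /private/ stays disallowed even though the second record
# would allow everything.
robots_txt = """\
User-agent: *
Disallow: /private/

User-agent: *
Disallow:
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())
print(rp.can_fetch("SomeBot", "http://example.com/private/page.html"))  # False
print(rp.can_fetch("SomeBot", "http://example.com/public/page.html"))   # True
```

With the old behavior described in the mail, the second ``*`` record would have overridden the first and the private path would have been reported as fetchable.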
As I'm a very novice Python programmer, I really want some expert comments
on this matter.
I apologize again if I'm wasting your time.
Thanks in advance,
Taskinoor Hasan Sajid
Tarek Ziadé is organizing a sprint on general
distutils/setuptools/packaging this weekend. Physically it's in
Arlington VA, but participants will be hanging out in #distutils on
More information at
--am"If you're in Fairfax County and need a lift, let me know"k
Martin v. Löwis wrote:
> So 2.6.0 will contain a lot of tests that have never been exercised on
> a wide variety of systems. Some are incorrect, and get fixed in 2.6.1,
> and stay fixed afterwards. This is completely different from somebody
> introducing a new test in 2.6.4. It means that there are more failures
> in a maintenance release, not fewer as in the first case.
If 2.6.1 has some (possibly accidental, but exposed to the users)
behavior that is not a clear bug, it should be kept through 2.6.x.
You may well want to change it in 2.7, but not in 2.6.4. Adding a
test to 2.6.2 ensures that the behavior will not silently disappear
because of an unrelated bugfix in 2.6.3.
> For the search engine issue, is there any way we can tell robots to
> ignore the rewrite rules so they see the broken links? (although even
> that may not be ideal, since what we really want is to tell the robot
> the link is broken, and provide the new alternative)
I may be missing something obvious, but isn't this the exact intent of
HTTP response code 301 (Moved Permanently)?
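To illustrate, a 301 is just a status line plus a Location header pointing at the new address; well-behaved robots update their index from it. A minimal WSGI sketch (the app and target URL are hypothetical, and the response is captured in-process rather than served):

```python
from wsgiref.util import setup_testing_defaults

NEW_URL = "http://docs.example.com/new/intro.html"  # made-up new location

def app(environ, start_response):
    # Permanent redirect: tells browsers *and* crawlers the canonical
    # new URL for a page that has moved.
    start_response("301 Moved Permanently", [("Location", NEW_URL)])
    return [b""]

# Capture the response instead of running a real server.
captured = {}
def start_response(status, headers):
    captured["status"] = status
    captured["headers"] = dict(headers)

environ = {}
setup_testing_defaults(environ)  # fills in a minimal test environ
body = b"".join(app(environ, start_response))
print(captured["status"])  # 301 Moved Permanently
```

A 302 (Found) would merely say "temporarily elsewhere"; it is the *permanent* code that invites search engines to replace the old URL.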