Where can I find *really* old Python versions? I managed to find
1.2, but I want to get my hands on <1.0 versions if at all possible...
Thanks.
--
gpg --keyserver keyserver.pgp.com --recv-keys 46D01BD6 54C4E1FE
Secure (inaccessible): 4BD1 7705 EEC0 260A 7F21 4817 C7FC A636 46D0 1BD6
Insecure (accessible): C5A5 A8FA CA39 AB03 10B8 F116 1713 1BCF 54C4 E1FE
Learn Python! http://www.ibiblio.org/obp/thinkCSpy
Since iterator objects work like sequences in several contexts, maybe they
could support sequence-like operations such as addition. This would let
you write
    for x in iter1 + iter2:
        do_something(x)

instead of

    for x in iter1:
        do_something(x)
    for x in iter2:
        do_something(x)

or the slightly better

    for i in iter1, iter2:
        for x in i:
            do_something(x)
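
For concreteness, here is a rough sketch of what such concatenation could
mean, written as a plain generator function instead of the proposed "+"
operator (the name chain is purely illustrative):

    def chain(*iterators):
        # Yield every item of each iterator in turn; roughly the
        # semantics that "iter1 + iter2" would have under this idea.
        for it in iterators:
            for x in it:
                yield x

    # The first example above then becomes:
    # for x in chain(iter1, iter2):
    #     do_something(x)
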
-- Sami Hangaslammi --
I now have a whole stack of modules that interface to MacOS toolboxes
that compile for unix-Python on MacOSX, but I'm a bit unsure about how
I should add these to the standard build.
So far what I've checked in (in configure) is only a bit of glue that
allows the toolbox modules to be loaded, but not yet the changes to
setup.py that will actually compile and link the modules.
I can do this in two ways:
1) Keep everything as-is and just check in the mods to setup.py.
2) Make the MacOS toolbox modules dependent on a configure switch. The
toolbox glue would then also become dependent on this switch.
The first option seems to be the standard nowadays: setup.py simply
builds everything it can find and for which the prerequisite
headers/libs are found.
The second option seems a bit more friendly to Pythoneers who view
MacOSX as simply unix-with-a-pretty-face and use Python only for
command-line scripts and cgi and such. Also, the toolbox modules will
be less stable than average modules for some time to come: as they're
shared between unix-Python and MacPython and generated on the latter,
the repository version might not build for a few days while I get my
act together. On the other hand: a failing compile of an extension
module shouldn't bother them overmuch, and one can always comment out
the setup.py lines.
A problem with the second option is that I have absolutely no idea how
to test for configure flags in setup.py.
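
Would something like the following work, assuming the configure switch
ends up as a Makefile variable (hypothetically called WITH_TOOLBOX_GLUE
here) that setup.py can read back through distutils?

    # Sketch only: configure is assumed to write WITH_TOOLBOX_GLUE into
    # the generated Makefile; setup.py reads it back and only registers
    # the MacOS toolbox extensions when it is set.
    from distutils import sysconfig

    def toolbox_glue_enabled():
        value = sysconfig.get_config_var('WITH_TOOLBOX_GLUE')
        return bool(value) and str(value).lower() not in ('0', 'no')
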
To complicate matters more I'm thinking of turning Python into a
framework, which would give OSX-Python a lot of the niceties that
MacPython users are used to (applets and building standalone applications
without a C compiler, to name two). In that case many users will
probably choose either to go the whole way (install Python as a
framework and include the toolbox modules) or forget about the macos
stuff altogether.
What do people think about this?
--
Jack Jansen | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen(a)oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack | ++++ see http://www.xs4all.nl/~tank/ ++++
[GvR]
>
> > [Michael Hudson]
> > > One - probably called Compile - will sport a __call__ method which
> > > will act much like the builtin "compile" of 2.1 with the
> > > difference that after it has compiled a __future__ statement, it
> > > "remembers" it and compiles all subsequent code with the
> > > __future__ options in effect.
> > >
> > > It will do this by examining the co_flags field of any code object
> > > it returns, which in turn means writing and maintaining a Python
> > > version of the function PyEval_MergeCompilerFlags found in
> > > Python/ceval.c.
>
> > FYI, in Jython (internally) we have a series of compile_flags functions
> > that take an "opaque" object CompilerFlags that is passed to the function,
> > and compilation actually changes the object in order to reflect future
> > statements encountered during compilation...
> > Not elegant but avoids code duplication.
> >
> > Of course we can change that.
>
> Does codeop currently work in Jython? The solution should continue to
> work in Jython then.
We have our interface compatible version of codeop that works.
> Does Jython support the same flag bit values as
> CPython? If not, Paul Prescod's suggestion to use keyword arguments
> becomes very relevant.
we support a subset of the co_flags; CO_NESTED, e.g., is there with the same
value.
But the embedding API is very different: my implementation of nested
scopes does not define any PyCF_... flags; we have an internal CompilerFlags
object, but it is more similar to PyFutureFeatures ...
Samuele.
...
> > >
> > > Does codeop currently work in Jython? The solution should continue to
> > > work in Jython then.
> > We have our interface compatible version of codeop that works.
>
> Would implementing the new interfaces I sketched out for codeop.py be
> possible in Jython? That's the bit I care about, not so much the
> interface to __builtin__.compile.
Yes, it's of course possible.
> > > Does Jython support the same flag bit values as
> > > CPython? If not, Paul Prescod's suggestion to use keyword arguments
> > > becomes very relevant.
> > we support a subset of the co_flags; CO_NESTED, e.g., is there with the same
> > value.
> >
> > But the embedding API is very different: my implementation of nested
> > scopes does not define any PyCF_... flags; we have an internal CompilerFlags
> > object, but it is more similar to PyFutureFeatures ...
>
> Is this object exposed to Python code at all?
Not publicly, but in Jython the separating line is a bit different,
because public Java classes are always accessible from Jython,
even most of the internals. That does not mean that every use of them
is welcome and supported.
> One approach would be
> PyObject-izing PyFutureFlags and making *that* the fourth argument to
> compile...
>
> class Compiler:
>     def __init__(self):
>         self.ff = ff.new() # or whatever
>     def __call__(self, source, filename, start_symbol):
>         code = compile(source, filename, start_symbol, self.ff)
>         self.ff.merge(code.co_flags)
>         return code
I see; "internally" we already have a compiler_flags function
that does the same as:
> code = compile(source, filename, start_symbol, self.ff)
> self.ff.merge(code.co_flags)
where self.ff is a CompilerFlags object.
I can rearrange things for any interface; I was only trying to explain
our approach and situation, and a possible way to avoid duplicating some
internal code in Python.
Samuele.
Samuele Pedroni <pedroni(a)inf.ethz.ch> writes:
> [GvR]
> >
> > > [Michael Hudson]
> > > > One - probably called Compile - will sport a __call__ method which
> > > > will act much like the builtin "compile" of 2.1 with the
> > > > difference that after it has compiled a __future__ statement, it
> > > > "remembers" it and compiles all subsequent code with the
> > > > __future__ options in effect.
> > > >
> > > > It will do this by examining the co_flags field of any code object
> > > > it returns, which in turn means writing and maintaining a Python
> > > > version of the function PyEval_MergeCompilerFlags found in
> > > > Python/ceval.c.
> >
> > > FYI, in Jython (internally) we have a series of compile_flags functions
> > > that take an "opaque" object CompilerFlags that is passed to the function,
> > > and compilation actually changes the object in order to reflect future
> > > statements encountered during compilation...
> > > Not elegant but avoids code duplication.
> > >
> > > Of course we can change that.
> >
> > Does codeop currently work in Jython? The solution should continue to
> > work in Jython then.
> We have our interface compatible version of codeop that works.
Would implementing the new interfaces I sketched out for codeop.py be
possible in Jython? That's the bit I care about, not so much the
interface to __builtin__.compile.
> > Does Jython support the same flag bit values as
> > CPython? If not, Paul Prescod's suggestion to use keyword arguments
> > becomes very relevant.
> we support a subset of the co_flags; CO_NESTED, e.g., is there with the same
> value.
>
> But the embedding API is very different: my implementation of nested
> scopes does not define any PyCF_... flags; we have an internal CompilerFlags
> object, but it is more similar to PyFutureFeatures ...
Is this object exposed to Python code at all? One approach would be
PyObject-izing PyFutureFlags and making *that* the fourth argument to
compile...
class Compiler:
    def __init__(self):
        self.ff = ff.new() # or whatever
    def __call__(self, source, filename, start_symbol):
        code = compile(source, filename, start_symbol, self.ff)
        self.ff.merge(code.co_flags)
        return code
Cheers,
M.
--
Like most people, I don't always agree with the BDFL (especially
when he wants to change things I've just written about in very
large books), ...
-- Mark Lutz, http://python.oreilly.com/news/python_0501.html
Paul Prescod <paulp(a)ActiveState.com> writes:
> Michael Hudson wrote:
> >
> >...
> >
> > At one point I was going to use the same bits as are used in the
> > code.co_flags field, which was probably where the bitfield idea
> > originated.
> >
> > By "keyword arguments" do you mean e.g:
> >
> > compile(source, file, start_symbol, generators=1, division=0)
> >
> > ? I think that would be mildly painful for the one use I had in mind
> > (the additions to codeop), and also mildly painful to implement.
>
> Sorry, could you elaborate on why this is painful to use and implement?
Well, I don't know in detail how keyword arguments work from the C
side. Your suggestion turns a roughly 4 line change I knew exactly
how to do into a 20-30 line change I'd have to work on. I only said
"mildly painful". The awkwardness of use would just mean using **,
yes.
> Considering the availability of **args, the code above looks to me like
> syntactic sugar for the code below:
>
> > compile(source, file, start_symbol, {'generators':1, 'division':0})
Well yes, but I think the latter is closer to what one means, which is
to say passing a (i.e. one) set of options.
> > would be better from my point of view. I think this is a bit of a
> > propeller-heads-only feature, to be honest, so I'm not that inclined
> > to worry about the API.
>
> I would just like to see an end to the convention of using bitfields in
> Python everywhere. You're just my latest target.
Fair enough. I've probably been corrupted by C on this one.
> Python is not a really great bit-manipulation language!
<aside>Augmented assignment helps a *lot* here!</aside>
At any rate, the fact that I'd temporarily forgotten about the
existence of Jython is the more serious blunder...
Cheers,
M.
--
. <- the point your article -> .
|------------------------- a long way ------------------------|
-- Christophe Rhodes, ucam.chat
Paul Prescod <paulp(a)ActiveState.com> writes:
> Michael Hudson wrote:
> >
> >...
> > I propose adding a fourth, optional, "flags" argument to the
> > builtin "compile" function. If this argument is omitted, there
> > will be no change in behaviour from that of Python 2.1.
> >
> > If it is present it is expected to be an integer, representing
> > various possible compile time options as a bitfield.
>
> Nit: What is the virtue to using a C-style bitfield? The efficiency
> isn't much of an issue. I'd prefer either keyword arguments or a list of
> strings.
Err, it hadn't really occurred to me to do anything else, to be honest!
At one point I was going to use the same bits as are used in the
code.co_flags field, which was probably where the bitfield idea
originated.
By "keyword arguments" do you mean e.g.:
compile(source, file, start_symbol, generators=1, division=0)
? I think that would be mildly painful for the one use I had in mind
(the additions to codeop), and also mildly painful to implement.
compile(source, file, start_symbol, {'generators':1, 'division':0})
would be better from my point of view. I think this is a bit of a
propeller-heads-only feature, to be honest, so I'm not that inclined
to worry about the API.
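
For illustration, a rough sketch of how such a dict could be mapped onto
the proposed integer bitfield (the option names and bit values below are
placeholders, not an agreed API):

    # Placeholder bit values; in practice these would be the PyCF_*
    # constants the proposal would expose.
    _OPTION_BITS = {
        'generators': 0x1,
        'division':   0x2,
    }

    def options_to_flags(options):
        # Turn e.g. {'generators': 1, 'division': 0} into one bitfield.
        flags = 0
        for name, enabled in options.items():
            if name not in _OPTION_BITS:
                raise ValueError("unknown compile option: %r" % (name,))
            if enabled:
                flags |= _OPTION_BITS[name]
        return flags
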
Cheers,
M.
--
3. Syntactic sugar causes cancer of the semicolon.
-- Alan Perlis, http://www.cs.yale.edu/homes/perlis-alan/quotes.html
Hi.
[Michael Hudson]
> One - probably called Compile - will sport a __call__ method which
> will act much like the builtin "compile" of 2.1 with the
> difference that after it has compiled a __future__ statement, it
> "remembers" it and compiles all subsequent code with the
> __future__ options in effect.
>
> It will do this by examining the co_flags field of any code object
> it returns, which in turn means writing and maintaining a Python
> version of the function PyEval_MergeCompilerFlags found in
> Python/ceval.c.
FYI, in Jython (internally) we have a series of compile_flags functions
that take an "opaque" object CompilerFlags that is passed to the function,
and compilation actually changes the object in order to reflect future
statements encountered during compilation...
Not elegant but avoids code duplication.
Of course we can change that.
Samuele Pedroni.
Guido van Rossum <guido(a)zope.com> writes:
> > Not directly relevant to the PEP, but...
> >
> > Guido van Rossum <guido(a)zope.com> writes:
> >
> > > Q. What about code compiled by the codeop module?
> > >
> > > A. Alas, this will always use the default semantics (set by the -D
> > > command line option). This is a general problem with the
> > > future statement; PEP 236[4] lists it as an unresolved
> > > problem. You could have your own clone of codeop.py that
> > > includes a future division statement, but that's not a general
> > > solution.
> >
> > Did you look at my Nasty Hack(tm) to bodge around this? It's at
> >
> > http://starship.python.net/crew/mwh/hacks/codeop-hack.diff
> >
> > if you haven't. I'm not sure it will work with what you're planning
> > for division, but it works for generators (and worked for nested
> > scopes when that was relevant).
>
> Ouch. Nasty. Hat off to you for thinking of this!
I'll choose to take this as a positive remark :-)
> > There are a host of saner ways round this, of course - like adding an
> > optional "flags" argument to compile, for instance.
>
> We'll have to keep that in mind.
Here's a fairly short pre-PEP on the issue. If I haven't made any
gross editorial blunders, can Barry give it a number and check the
sucker in?
PEP: XXXX
Title: Supporting __future__ statements in simulated shells
Version: $Version:$
Author: Michael Hudson <mwh(a)python.net>
Status: Draft
Type: Standards Track
Requires: 0236
Created: 30-Jul-2001
Python-Version: 2.2
Post-History:
Abstract
As noted in PEP 236, there is no clear way for "simulated
interactive shells" to simulate the behaviour of __future__
statements in "real" interactive shells, i.e. have __future__
statements' effects last the life of the shell.
This short PEP proposes to make this possible by adding an
optional fourth argument to the builtin function "compile" and
adding machinery to the standard library modules "codeop" and
"code" to make the construction of such shells easy.
Specification
I propose adding a fourth, optional, "flags" argument to the
builtin "compile" function. If this argument is omitted, there
will be no change in behaviour from that of Python 2.1.
If it is present it is expected to be an integer, representing
various possible compile time options as a bitfield. The
bitfields will have the same values as the PyCF_* flags #defined
in Include/pythonrun.h (at the time of writing there are only two
- PyCF_NESTED_SCOPES and PyCF_GENERATORS). These are currently
not exposed to Python, so I propose adding them to codeop.py
(because it's already here, basically).
XXX Should the supplied flags be or-ed with the flags of the
calling frame, or do we override them? I'm for the former,
slightly.
I also propose adding a pair of classes to the standard library
module codeop.
One - probably called Compile - will sport a __call__ method which
will act much like the builtin "compile" of 2.1 with the
difference that after it has compiled a __future__ statement, it
"remembers" it and compiles all subsequent code with the
__future__ options in effect.
It will do this by examining the co_flags field of any code object
it returns, which in turn means writing and maintaining a Python
version of the function PyEval_MergeCompilerFlags found in
Python/ceval.c.
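
A minimal sketch of that behaviour, assuming the fourth argument to
compile proposed above and with the relevant feature bits passed in as
a parameter (the real values would be the PyCF_* constants once they
are exposed):

    class Compile:
        # Compiles source while remembering any __future__ options seen
        # in earlier calls; a rough Python-level counterpart of
        # PyEval_MergeCompilerFlags.
        def __init__(self, feature_mask):
            self.feature_mask = feature_mask  # co_flags bits that mark features
            self.flags = 0
        def __call__(self, source, filename, symbol):
            codeob = compile(source, filename, symbol, self.flags)
            # Fold any future-feature bits set on the new code object
            # back into the persistent flags.
            self.flags |= codeob.co_flags & self.feature_mask
            return codeob
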
Objects of the other class added to codeop - probably called
CommandCompiler or somesuch - will do the job of the existing
codeop.compile_command function, but in a __future__-aware way.
Finally, I propose to modify the class InteractiveInterpreter in
the standard library module code to use a CommandCompiler to
emulate still more closely the behaviour of the default Python
shell.
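
A rough sketch of that wiring, using a simplified stand-in for
code.InteractiveInterpreter and assuming codeop grows the
CommandCompiler class described above:

    from codeop import CommandCompiler

    class SimulatedShell:
        # Simplified stand-in for code.InteractiveInterpreter, just to
        # show the intended use of CommandCompiler.
        def __init__(self):
            # One compiler per shell, so __future__ statements entered
            # interactively stay in effect for later input.
            self.compile = CommandCompiler()
            self.namespace = {}
        def runsource(self, source, filename="<input>"):
            code = self.compile(source, filename, "single")
            if code is None:
                return True   # incomplete input; buffer more lines
            exec(code, self.namespace)
            return False
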
Backward Compatibility
Should be very few or none; the changes to compile will make no
difference to existing code, nor will adding new functions or
classes to codeop. Existing code using
code.InteractiveInterpreter may change in behaviour, but only for
the better, in that the "real" Python shell will be more closely
impersonated.
Forward Compatibility
codeop will require very mild tweaking as each new __future__
statement is added. Such events will hopefully be very rare, so
such a burden is unlikely to cause significant pain.
Implementation
None yet; none of the above should be at all hard. If this draft
is well received, I'll upload a patch to sf "soon" and point to it
here.
Copyright
This document has been placed in the public domain.
--
ARTHUR: The ravenous bugblatter beast of Traal ... is it safe?
FORD: Oh yes, it's perfectly safe ... it's just us who are in
trouble.
-- The Hitch-Hiker's Guide to the Galaxy, Episode 6