One problem I see is that what you'd really like is to overload + and
split and such on path objects. But this creates a problem if you then
pass this path object to something that expects old-fashioned strings:
if it wants to manipulate that path it will use string operations,
which suddenly have different semantics...
Recently, Just van Rossum <just(a)letterror.com> said:
> Every once in a while I wished for a path object to manipulate file system
> paths. Things like
> os.path.join(a, b, c, os.path.splitext(os.path.basename(p))[0] + ".ext")
> quickly get frustrating (so of course I never write them like that ;-).
> I thought of implementing a path object several times, but always stopped
> when I realized (for the Nth time ;-) that you'd then have to do something like
> file = open(p.tostring())
> whenever you want to *use* your path. That doesn't help at all.
> But: since strings are now subclassable (they are, aren't they?) this should
> no longer be a problem!
> Would it be a worthwhile project to design and implement a path object for the
> standard library?
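Since strings are subclassable, the idea above can be sketched roughly as follows. This is only an illustration of the approach, not an existing stdlib API: the class name `Path` and its methods are made up for the example.

```python
import os

# A minimal sketch of a path type that subclasses str, so it still
# works anywhere a plain string path is expected (e.g. open()) --
# no p.tostring() needed.
class Path(str):
    def join(self, *parts):
        # Wrap os.path.join so the result is again a Path.
        return Path(os.path.join(self, *parts))

    def basename(self):
        return Path(os.path.basename(self))

    def splitext(self):
        root, ext = os.path.splitext(self)
        return Path(root), ext

p = Path("/tmp/archive.tar.gz")
root, ext = p.splitext()
# root == "/tmp/archive.tar", ext == ".gz", and root is still a str.
```

Because `Path` is a `str` subclass, passing one to code that does old-fashioned string manipulation still works, though (as noted above) the string operations won't have path semantics.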
Jack Jansen | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen(a)oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm
> From: Andrew Kuchling [SMTP:email@example.com]
> Misc/cheatsheet is way out of date (the 'access' keyword?). I'd like
> to replace it with the text version of Simon Brunning and Richard
> Gruet's Python cheatsheet. Issues:
Sounds like a good idea to me.
> 1) Simon, do you grant permission to do that? Should I ask Richard, too?
Yes and yes. I've copied Richard in on this.
> 2) The new file is significantly larger: 100K instead of just 22K.
> Is that OK?
> 3) The reference will need updating for 2.1 and 2.2 changes, but
> slightly obsolete is better than waaay obsolete. Simon, do you
> want help updating it for 2.1/2.2?
Thanks, Andy, but I'm on the Python 2.1 version - it's no more than a week away.
I'll also do a 2.2 version as soon as I know what's going to be in it!
Iterators and generators seem to be in for sure, and after reading your
'What's New in Python 2.2' document I understand what they do, so they
aren't a problem. Ditto the other changes mentioned in your document.
But I'm not sure about the Type/Class unification stuff - I don't know if
it's going in, and I don't understand it, either! So here, I could use some help!
> I think test_quopri is too latin-1 centric.
Strictly speaking, there is nothing latin-1 centric in test_quopri.py
> For instance, on my Mac, Python source is in MacRoman encoding. CVS
> knows all about this, so it happily converts the
> latin-1-upsidedown-exclam to a macroman-upsidedown-exclam, and if I
> look at the source code I see the same glyph as I see on Unix.
This is the problem: Python source code is not in Latin-1; bytes
inside strings and comments are treated as-is. So the CVS "binary" mode
would come closer to how Python files should be treated, although
you'd still want to convert line endings.
> I'm surprised the test doesn't fail on Windows as well, or do
> Windows pythonistas generally work with source in latin1?
Most of these people probably use code page 1252, which is identical
to latin-1 except in the range 0x80 to 0x9f.
For test_quopri.py, the best thing would be to replace the characters
outside range(128) with \x escapes, to avoid the problem with Mac
CVS (which really is the problem here - if you unpack Python from the
source distribution, the test should pass fine on your system).
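The suggested fix can be sketched with a small helper (hypothetical, for illustration only) that rewrites every character outside range(128) as a \x escape, leaving pure ASCII text that no CVS charset conversion can touch:

```python
def ascii_escape(s):
    # Rewrite any character outside range(128) as a \x escape so
    # the resulting text is pure ASCII.
    return "".join(
        ch if ord(ch) < 128 else "\\x%02x" % ord(ch)
        for ch in s
    )

# The upside-down exclamation mark (Latin-1 0xa1) becomes "\xa1":
escaped = ascii_escape("\xa1Hola!")  # -> "\\xa1Hola!"
```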
On my Mandrake 8.0 laptop I ran make test after grabbing the latest stuff
from cvs and wound up with
145 tests OK.
1 test failed: test_linuxaudiodev
14 tests skipped: test_al test_cd test_cl test_dl test_gl
test_imgfile test_largefile test_nis test_ntpath test_socketserver
test_sunaudiodev test_unicode_file test_winreg test_winsound
The linuxaudiodev test never seems to work for me (though I do hear an
agonizing wail (is that supposed to be Homer Simpson?) from my laptop's
speakers when it's run). The result is always:
test test_linuxaudiodev crashed -- linuxaudiodev.error: (16, 'Device or
resource busy')
The socketserver test works when run manually.
Skip Montanaro wrote:
> The linuxaudiodev test never seems to work for me (though I do hear an
> agonizing wail (is that supposed to be Homer Simpson?) from my laptop's
> speakers when it's run). The result is always:
> test test_linuxaudiodev crashed -- linuxaudiodev.error: (16, 'Device or
> resource busy')
> The socketserver test works when run manually.
Neil Schemenauer replied:
> Something has /dev/dsp or /dev/audio open. Are you playing MP3s or
> running something like esd?
If another process was holding /dev/whatever open, it's hard to
understand how the agonizing wail was produced.
The code in linuxaudiodev does its initializations in an order which
is explicitly proscribed by the OSS programming guide.
I have a SWIG-based linux audio module that I use in my code, not
wanting anything to do with the (to me) highly suspect linuxaudiodev
module. Should I dust off this code and contribute it?
Of course, if we really want to provide high-quality audio support on
Linux, we have to account for a number of different options - at
least, OSS and ALSA... this is sort of a can-of-worms. But I do
believe that the current linuxaudiodev module should be deprecated.
Misc/cheatsheet is way out of date (the 'access' keyword?). I'd like
to replace it with the text version of Simon Brunning and Richard
Gruet's Python cheatsheet. Issues:
1) Simon, do you grant permission to do that? Should I ask Richard, too?
2) The new file is significantly larger: 100K instead of just 22K.
Is that OK?
3) The reference will need updating for 2.1 and 2.2 changes, but
slightly obsolete is better than waaay obsolete. Simon, do you
want help updating it for 2.1/2.2?
> > > The point is to put the commonly called things in the vtable in a way that
> > > you can avoid as much conditional code as possible, while less common
> > > things get dealt with in ways you'd generally expect. (Dynamic lookups
> > > with caching and suchlike things)
> >If I'm right, you're designing an object-based VM.
> More or less, yep, with a semi-generic two-arg dispatch for the binary
> methods. The vtable associated with each variable has a fixed set of
> required functions--add with 2nd arg an int, add with second arg a bigint,
> add with second arg a generic variable, for example. Basically all the
> things one would commonly do on data (add, subtract, multiply, divide,
> modulus, some string ops, get integer/real/string representation, copy,
> destroy, etc) have entries, with a catchall "generic method call" to
> handle, well, generic method calls.
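The two-arg dispatch described above can be sketched in Python. Everything here is illustrative (the names `VTable`, `vm_add`, etc. are made up for the example); the point is just that each value type carries a fixed table of handlers, with a type-specific fast path and a generic fallback:

```python
# Sketch of a per-type vtable with a fixed set of required entries:
# "add with 2nd arg an int" as the fast path, and "add with second
# arg a generic variable" as the catch-all.
class VTable:
    def __init__(self, add_int, add_generic):
        self.add_int = add_int          # fast path: RHS is an int
        self.add_generic = add_generic  # catch-all for other RHS types

def int_add_int(a, b):
    return a + b

def int_add_generic(a, b):
    # Generic fallback: coerce the RHS, then add.
    return a + int(b)

INT_VTABLE = VTable(int_add_int, int_add_generic)

def vm_add(vtable, a, b):
    # The interpreter dispatches through the vtable instead of
    # scattering conditional code through the opcode loop.
    if isinstance(b, int):
        return vtable.add_int(a, b)
    return vtable.add_generic(a, b)
```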
A question: when you say variable, do you mean variable (in the Perl sense)
or object? It has already been pointed out, but it's really confusing
from the point of view of Python terminology. Will Perl 6 have only
variables which contain objects, or truly references to objects like Python?
I should repeat that your explanation that assignment is somehow performed
by calling a method on the "variable" is quite a strange notion in general.
I can imagine having a slot called on assignment that eventually does a copy
or returns just the object, but assignment as an operation on the lvalue
is something very peculiar. I know that perl5 assignment is an operator
returning an lvalue; is this related?
> It's all designed with high-speed dispatch and minimal conditional branch
> requirements in mind, as well as encapsulating all the "what do I do on
> data" functions. Basically the opcodes generally handle control flow,
> register/stack ops, and VM management, while actual operations on variables
> is left to the vtable methods attached to the variables.
> I expect things to get mildly incestuous for speed reasons, but I'm OK with
> that. :)
> >I don't know how
> >typical OO programming is in Perl, but in Python it plays a central role;
> >if your long-run goal is to compile to native code you should have
> >a "hard-wired" concept of classes and the like,
> Yep. There's a fast "get name of object's class" and "get pointer to
> object's class stash/variable table/methods/subs/<insert generic class
> terminology here>".
> >because an operation
> >like getting the class of an instance should be as direct and fast as possible,
> >if you want to use any of the "nice" optimizations for a VM for an OO dynamic
> >language: inline polymorphic caches
> Yup. I'm leaning towards a class based cache for inherited methods and
> suchlike things, but I'm not sure that's sufficient if we're going to have
> objects whose inheritance tree is handled on a per-object basis.
That makes sense for the interpreted version, but for speed, call-site
caches are far more promising when compiling natively, though also more
complicated. I imagine you already know e.g. the Self project literature.
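For illustration, a call-site cache in the sense used here can be sketched as follows (all names are hypothetical): each call site remembers the last receiver class and the method it resolved to, so repeated calls on the same class skip the full lookup.

```python
# Sketch of a monomorphic call-site (inline) cache.
class CallSiteCache:
    def __init__(self, name):
        self.name = name
        self.cached_cls = None
        self.cached_method = None

    def invoke(self, receiver, *args):
        cls = type(receiver)
        if cls is not self.cached_cls:
            # Cache miss: do the full (slow) lookup and remember it.
            self.cached_cls = cls
            self.cached_method = getattr(cls, self.name)
        return self.cached_method(receiver, *args)

class Greeter:
    def greet(self, who):
        return "hello, " + who

site = CallSiteCache("greet")
first = site.invoke(Greeter(), "world")   # miss: full lookup
second = site.invoke(Greeter(), "world")  # hit: cached method
```

A class-based cache (as proposed in the quoted text) keys the cache on the class instead of the call site; the per-object inheritance mentioned above is exactly what would invalidate either scheme.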
> > customization
> Details? I'm not sure what you're talking about here.
Compiling different versions of the same method, for example with respect to
the receiver type in the single-dispatch case. See below.
> > or if/when possible: direct inlining.
> Yep, though the potential to mess with a class' methods at runtime tends to
> shoot this one down. (Perl's got similar issues with this, plus a few extra
> real nasty optimizer killers) I'm pondering a "check_if_changed" branch
> opcode that'll check a method/function/sub's definition to see if it's
> changed since the code was generated, and do the inline stuff if it hasn't.
> Though that still limits what you can do with code motion and other
> optimizations.
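The "check_if_changed" guard described above can be sketched like this (a toy model; `FuncRecord`, `guarded_call`, and the version stamp are invented for the example): compiled code records a version for the definition it inlined and falls back to a normal call if the definition has changed at run time.

```python
# Each function the compiler inlines gets a record with a version
# stamp that is bumped whenever the definition is replaced.
class FuncRecord:
    def __init__(self, func):
        self.func = func
        self.version = 0

    def redefine(self, func):
        self.func = func
        self.version += 1

def guarded_call(record, compiled_version, inlined, *args):
    if record.version == compiled_version:
        return inlined(*args)    # guard holds: use the inlined copy
    return record.func(*args)    # definition changed: slow re-dispatch

record = FuncRecord(lambda x: x * 2)
baked_version = record.version
inlined_copy = lambda x: x * 2          # the copy "baked into" the code

fast = guarded_call(record, baked_version, inlined_copy, 21)  # 42
record.redefine(lambda x: x + 1)        # runtime redefinition
slow = guarded_call(record, baked_version, inlined_copy, 21)  # 22
```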
> >If any of the high-level OO systems cannot be used, you have to choose
> >a lower-level one and map these to it;
> >a vtable bytecode mapping model is too underspecified.
> Oh, sure, saying "vtable bytecode mapping model" is an awful lot like
> saying "procedural language"--it's informative without actually being
> useful... :)
I imagine you already know: especially when compiling to native code,
it is all a matter of optimizing for the common case, and being prepared
to be quite slow otherwise, especially when dealing with dynamic changes.
In both Perl and Python there is no clear distinction between normal ops and
dynamic changes to methods... but in any case the latter are rare compared to
the rest. Threading adds complexity to the problem.
For the rest it's just speed vs. memory; customization is a clear example.
regards, Samuele Pedroni.
Martin has uploaded a patch which modifies the Python API level
number depending on the setting of the compile time option
for internal Unicode width (UCS-2/UCS-4):
I am not sure whether this is the right way to approach this
problem, though, since it affects all extensions -- not only
ones using Unicode.
If at all possible, I'd prefer some other means to
handle this situation (extension developers are certainly not
going to start shipping binaries for narrow and wide Python
versions if their extension does not happen to use Unicode).
Any ideas?
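One alternative along these lines: only code that actually uses Unicode could probe the build's internal width at run time, rather than baking it into the API level that every extension sees. In Python this can be checked via `sys.maxunicode` (a real attribute; the wide/narrow labels below are just illustrative):

```python
import sys

# sys.maxunicode reflects the build's internal Unicode width:
# 0x10FFFF on a wide (UCS-4) build, 0xFFFF on a narrow (UCS-2) one.
if sys.maxunicode == 0x10FFFF:
    width = "wide (UCS-4)"
else:
    width = "narrow (UCS-2)"
```

A C extension could do the analogous check at compile time, so that only Unicode-using extensions would need separate narrow and wide binaries.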
CEO eGenix.com Software GmbH
Consulting & Company: http://www.egenix.com/
Python Software: http://www.lemburg.com/python/