Hi.
[Mark Hammond]
> The point isn't about my suffering as such. The point is more that
> python-dev owns a tiny amount of the code out there, and I don't believe we
> should put Python's users through this.
>
> Sure - I would be happy to "upgrade" all the win32all code, no problem. I
> am also happy to live on the bleeding edge and take whatever pain that will
> cause.
>
> The issue is simply the user base, and giving Python a reputation of not
> being able to painlessly upgrade even dot revisions.
I agree with all this.
[As I imagined, explicit syntax did not catch on and would require a lot
of discussion.]
[GvR]
> > Another way is to use special rules
> > (similar to those for class defs), e.g. having
> >
> > <frag>
> > y=3
> > def f():
> >     exec "y=2"
> >     def g():
> >         return y
> >     return g()
> >
> > print f()
> > </frag>
> >
> > # print 3.
> >
> > Is that confusing for users? Maybe they will more naturally expect 2
> > as the outcome (given nested scopes).
>
> This seems the best compromise to me. It will lead to the least
> broken code, because this is the behavior that we had before nested
> scopes! It is also quite easy to implement given the current
> implementation, I believe.
>
> Maybe we could introduce a warning rather than an error for this
> situation though, because even if this behavior is clearly documented,
> it will still be confusing to some, so it is better if we outlaw it in
> some future version.
>
Yes, this would be easy to implement, but more confusing situations can arise:
<frag>
y=3
def f():
    y=9
    exec "y=2"
    def g():
        return y
    return y, g()
print f()
</frag>
What should this print? Unlike the class-def case, there is no canonical
answer for this situation.
Or consider:
<frag>
def f():
    from foo import *
    def g():
        return y
    return g()
print f()
</frag>
[Mark Hammond]
> > This probably won't be a very popular suggestion, but how about pulling
> > nested scopes (I assume they are at the root of the problem)
> > until this can be solved cleanly?
>
> Agreed. While I think nested scopes are kinda cool, I have lived without
> them, and really without missing them, for years. At the moment the cure
> appears worse than the symptoms in at least a few cases. If nothing else,
> it compromises the elegant simplicity of Python that drew me here in the
> first place!
>
> Assuming that people really _do_ want this feature, IMO the bar should be
> raised so there are _zero_ backward compatibility issues.
I won't say anything about pulling nested scopes (I don't think my opinion
can change things in this respect), but I must insist that, without explicit
syntax, raising the bar IMO has too high an implementation cost (in both
performance and complexity) or creates confusion.
[Andrew Kuchling]
> >Assuming that people really _do_ want this feature, IMO the bar should be
> >raised so there are _zero_ backward compatibility issues.
>
> Even at the cost of additional implementation complexity? At the cost
> of having to learn "scopes are nested, unless you do these two things
> in which case they're not"?
>
> Let's not waffle. If nested scopes are worth doing, they're worth
> breaking code. Either leave exec and from..import illegal, or back
> out nested scopes, or think of some better solution, but let's not
> introduce complicated backward compatibility hacks.
IMO breaking code would be OK if we issue warnings today and have nested
scopes raise errors tomorrow. But this is simply a statement about
principles and the impression it gives.
IMO 'import *' in an inner scope should end up being an error;
I'm not sure about 'exec'.
We will need a final BDFL statement.
regards, Samuele Pedroni.
PEP: 0???
Title: Support for System Upgrades
Version: $Revision: 0.0 $
Author: mal(a)lemburg.com (Marc-André Lemburg)
Status: Draft
Type: Standards Track
Python-Version: 2.3
Created: 19-Jul-2001
Post-History:
Abstract
This PEP proposes strategies to allow the Python standard library
to be upgraded in parts without having to reinstall the complete
distribution or having to wait for a new patch level release.
Problem
Python currently does not allow overriding modules or packages in
the standard library per default. Even though this is possible by
defining a PYTHONPATH environment variable (the paths defined in
this variable are prepended to the Python standard library path),
there is no standard way of achieving this without changing the
configuration.
Since Python's standard library is starting to host packages which
are also available separately, e.g. the distutils, email and PyXML
packages, which can also be installed independently of the Python
distribution, it is desirable to have an option to upgrade these
packages without having to wait for a new patch level release of
the Python interpreter to bring along the changes.
Proposed Solutions
This PEP proposes two different but not necessarily conflicting
solutions:
1. Adding a new standard search path to sys.path:
$stdlibpath/system-packages just before the $stdlibpath
entry. This complements the already existing entry for site
add-ons $stdlibpath/site-packages which is appended to the
sys.path at interpreter startup time.
To make use of this new standard location, distutils will need
to grow support for installing certain packages in
$stdlibpath/system-packages rather than the standard location
for third-party packages $stdlibpath/site-packages.
2. Tweaking distutils to install directly into $stdlibpath for the
system upgrades rather than into $stdlibpath/site-packages.
The first solution has a few advantages over the second:
* upgrades can be easily identified (just look in
$stdlibpath/system-packages)
* upgrades can be deinstalled without affecting the rest
of the interpreter installation
* modules can be virtually removed from packages (see the
illustration right after this list); this is due to the way Python
imports packages: once it finds the top-level package directory it
stays in this directory for all subsequent package submodule imports
* the approach has an overall much cleaner design than the
hackish install on top of an existing installation approach
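To illustrate the third advantage with made-up names (the package
'somepkg' and its submodule 'somepkg.obsolete' are hypothetical):
assume an upgraded 'somepkg' is installed in
$stdlibpath/system-packages and deliberately omits a submodule that
the bundled stdlib copy still ships:

    import somepkg            # found in $stdlibpath/system-packages
    import somepkg.obsolete   # ImportError: the stdlib copy of this
                              # submodule is never consulted, because
                              # submodule imports stay inside the
                              # package directory found first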
The only advantages of the second approach are that the Python
interpreter does not have to be changed and that it works with
older Python versions.
Both solutions require changes to distutils. These changes can
also be implemented by package authors, but it would be better to
define a standard way of switching on the proposed behaviour.
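The following sketch illustrates the search order intended by
solution 1. It is illustrative only; the real change would live in
the interpreter's path initialization (e.g. site.py or getpath.c),
not in user code, and the names simply follow the $stdlibpath
notation used above:

    import sys, os

    stdlibpath = os.path.dirname(os.__file__)          # "$stdlibpath"
    system_packages = os.path.join(stdlibpath, 'system-packages')

    # Insert just before the stdlib entry so that upgraded packages
    # shadow the bundled ones, while site-packages stays at the end
    # of sys.path as it does today.
    if system_packages not in sys.path:
        try:
            sys.path.insert(sys.path.index(stdlibpath), system_packages)
        except ValueError:
            # stdlib entry spelled differently on this platform;
            # fall back to prepending
            sys.path.insert(0, system_packages)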
Scope
Solution 1: Python 2.3 and up
Solution 2: all Python versions supported by distutils
Credits
None
References
None
Copyright
This document has been placed in the public domain.
Local Variables:
mode: indented-text
indent-tabs-mode: nil
End:
--
Marc-Andre Lemburg
CEO eGenix.com Software GmbH
_______________________________________________________________________
eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,...
Python Consulting: http://www.egenix.com/
Python Software: http://www.egenix.com/files/python/
Now that I'm relieved, having shared my feelings with you, ;-)
what's the best path to get the python-bz2 module into Python 2.3?
Do you think it's something that should be in the core, or would it
be better to keep it as an external module?
The code is currently maintained at http://python-bz2.sf.net, if
someone wants to have a look at it.
--
Gustavo Niemeyer
[ 2AAC 7928 0FBF 0299 5EB5 60E2 2253 B29A 6664 3A0C ]
I recently came across an announcement about the Strongtalk system,
which contains the first fully developed strong, static type system for
Smalltalk. I wondered whether there might be useful ideas there for those
looking into static typing for Python.
http://www.cs.ucsb.edu/projects/strongtalk/pages/index.html
Hamish Lawson
Hello!
After a long time of inactivity, I hope to be back to some python
hacking. Before getting into any coding, I'd like to discuss with
you some conceptual and boring stuff I have in my mind.
In the past I have made small contributions to the Python standard
distribution. Unfortunately (for myself), I slowed down until I stopped
contributing, even though I have great affection for the interpreter. Now
I realize that one of the reasons I stopped contributing is that there's
a large inertia in getting stuff reviewed. The reason for this seems more
apparent now that I've been away for a while: there's a small core of
very busy developers working on core/essential/hard stuff *and* on code
review as well.
At the same time, I've seen Guido and others bothered a few times
by the lack of manpower. So the question is: how can I, a developer
who feels capable of helping in Python's development, take some of the
time-consuming tasks off your hands? Or, how could some part of the
development process be improved by giving people like me some
instructions? Also, isn't it easier to point out what's wrong in a
commit from someone who has been following the development process for
a while than to take the time to review their code in the SourceForge
patch system?
My feeling is that Python development is currently overly
centralized, and that you might be suffering from that now by being
unable to hand over some of your tasks to someone else. I feel that
every time I send a patch, besides contributing, I'm also overloading
you with more stuff to review, comment on, etc. Perhaps the fallback
cost of a wrong commit is too high now (did I hear someone mention
Subversion?)?!
Could someone please discuss these issues with me, or perhaps just kick
me away saying that I'm crazy? :-)
--
Gustavo Niemeyer
[ 2AAC 7928 0FBF 0299 5EB5 60E2 2253 B29A 6664 3A0C ]
Guido van Rossum <guido(a)python.org> writes:
> > > The best thing to do would perhaps be to make __mro__ assignable, but
> > > with a check that ensures the above constraint. I think I'd take a
> > > patch for that.
> >
> > Shouldn't be too hard.
> >
> > > I'd also take a patch for assignable __bases__. Note that there are
> > > constraints between __bases__ and __base__.
> >
> > Should assigning to __bases__ automatically tweak __mro__ and
> > __base__? Guess so.
>
> Yes. Note that changing __base__ should not be done lightly --
> basically, the old and new base must be layout compatible, exactly
> like for assignment to __class__.
OK. I can crib code from type_set_class, I guess. Or one could just
allow assignment to __bases__ when __base__ doesn't change? __base__
is object for the majority of new-style classes, isn't it?
Brr. There's a lot I don't know about post 2.2 typeobject.c.
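(For context, the layout check that __class__ assignment already does
behaves roughly like this; a toy illustration with invented class names,
not the proposed __bases__ patch:)

    class A(object):
        pass

    class B(object):
        __slots__ = ['x']          # different instance layout than A

    a = A()
    try:
        a.__class__ = B            # rejected: the layouts differ
    except TypeError, e:
        print e

Presumably the same kind of compatibility test would have to guard any
change of __base__ via __bases__ assignment.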
> > What would assigning to __base__ do in isolation?
> > Perhaps that shouldn't be writeable.
>
> Perhaps it could be writable when __bases__ is a 1-tuple.
Don't see the point of that.
> But it's fine if it's not writable.
Easier :)
> > > I'd also take a patch for assignable __name__.
> >
> > This is practically a one-liner, isn't it? Not hard, anyway.
>
> Probably. Can't remember why I didn't do it earlier.
It's a bit more complicated than that.
What's the deal wrt. dots in tp_name? Is there any way for a user
defined class to end up called "something.something_else"?
Oh, and while we're at it, here's a bogosity:
>>> class C(object):
... pass
...
>>> C.__module__
'__main__'
>>> C.__module__ = 1
>>> C.__module__
Traceback (most recent call last):
File "<stdin>", line 1, in ?
AttributeError: __module__
caused by lax testing in type_set_module.
> > And there was me wondering what I was going to do this evening.
>
> I don't have that problem -- a Zope customer problem was waiting for
> me today. :-(
Well, I didn't get it finished either. Fiddly, this stuff. Maybe by
tomorrow.
Cheers,
M.
--
The Internet is full. Go away.
-- http://www.disobey.com/devilshat/ds011101.htm
[Since the last message triggered no response, I'm retrying.]
I have changes in my sandbox to incorporate bsddb3 into Python
proper. This essentially is a copy of bsddb3 3.4.0, with the following
modifications:
- the extension module is called _bsddb (not bsddb3._db)
- the package is called bsddb (not bsddb3)
Both test_bsddb and test_anydbm continue to work unmodified.
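For instance, a minimal bsddb-style session (the file name below is just
an example) should behave the same against the merged package as before:

    import bsddb                   # now backed by the bsddb3 code

    db = bsddb.hashopen('/tmp/example.db', 'c')
    db['spam'] = 'eggs'
    print db['spam']
    db.close()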
Is it ok to commit these changes?
Regards,
Martin
Are you PythonLabs folks aware of this? It might be interesting &&||
relevant.
--
Cheers!
Chris Ryland
Em Software, Inc.
www.emsoftware.com
> -----Original Message-----
> From: Yahya H. Mirza [mailto:yahya_mirza@hotmail.com]
> Sent: Wednesday, October 30, 2002 9:08 AM
> To: Corin Day
> Cc: Jürg Gutknecht; davidi(a)borland.com; Ralph Johnson; George Bosworth;
> Stan Lippman; Yukihiro Matsumoto; eliot(a)parcplace.com; David Simmons;
> Gregory T. Sullivan; David Stutz; Dan Sugalski; ian.piumarta(a)inria.fr;
> David Ungar; Shawn Woods; Misha Dmitriev; Jim Miller (COM+); Tony Williams;
> Dan Fay (RESEARCH); Mark Ryland; Andrew Palay; Mark Lewin; Don Syme;
> Greg O'Shea; Pierre Sainoyant; Van Eden; Alan Borning;
> chambers(a)cs.washington.edu; Marc E. Fiuczynski; David Notkin;
> zahorjan(a)cs.washington.edu; jonal(a)cs.washington.edu;
> csk(a)cs.washington.edu; Brad Merrill; Brian Pepin; Brian Foote
> Subject: LAR Workshop Website Online,
>
>
> I would like to invite you all to register for LAR02:
>
> Our workshop website is now online and it can be accessed either through
> the main OOPSLA Workshop page or directly via:
>
> http://hosting.msugs.ch/auroraborealis/Workshops/LAR02/LAR02Home.htm
>
> If you would please take the time to Register even if you are
> participating, and provide some feedback, I would greatly appreciate it.
>
> Our workshop starts at 8:45 AM this Monday November 4 at the OOPSLA
> Conference in the Seattle Convention Center in Rooms 613 - 614.
>
> Please feel free to invite whoever you think would be interested in
> participating.
>
> Finally I would like to thank Cori Day for doing a wonderful job on the
> website.
>
> Sincerely,
>
> Yahya
>
> Yahya H. Mirza
> Aurora Borealis Software
> 8502 166th Ave. NE
> Redmond WA, 98052
> (425)-861-8147
Guido van Rossum <guido(a)python.org> writes:
> > I guess one way of doing this would be to reimplement mro resolution
> > and so on in Python, which would be annoying and inefficient. Hmm.
>
> Alas, yes. You'd have to be able to write a class's tp_mro slot
> (corresponding to the __mro__ attribute), but there's no way to do
> that in Python, because it is too easy to cause mistakes. E.g. all
> C code that currently uses tp_mro assumes that it is a tuple whose
> items are either types or classic classes.
>
> The best thing to do would perhaps be to make __mro__ assignable, but
> with a check that ensures the above constraint. I think I'd take a
> patch for that.
Shouldn't be too hard.
> I'd also take a patch for assignable __bases__. Note that there are
> constraints between __bases__ and __base__.
Should assigning to __bases__ automatically tweak __mro__ and
__base__? Guess so. What would assigning to __base__ do in isolation?
Perhaps that shouldn't be writeable.
> I'd also take a patch for assignable __name__.
This is practically a one-liner, isn't it? Not hard, anyway.
And there was me wondering what I was going to do this evening.
Cheers,
M.
--
Now this is what I don't get. Nobody said absolutely anything
bad about anything. Yet it is always possible to just pull
random flames out of ones ass.
-- http://www.advogato.org/person/vicious/diary.html?start=60
Kevin Jacobs <jacobs(a)penguin.theopalgroup.com> writes:
> On 30 Oct 2002, Michael Hudson wrote:
> > For moderately nefarious reasons[1] I've been trying to write a
> > metaclass whose instances have writable __bases__. This in itself
> > isn't so hard, but having assignments to __bases__ "do the right thing"
> > has eluded me, basically because I can't seem to affect the mro.
>
> The mro is an internal data structure of new-style classes, so redefining
> mro() doesn't change the values used.
Yeah, I noticed that eventually.
> Here is my (non-working) version that
> attempts to re-assign the class of an object, although it fails with a
> layout violation under Python 2.2.2.
>
> def base_getter(cls):
>     return cls.__my_bases__
>
> def base_setter(cls, bases):
>     if not bases:
>         bases = (object,)
>     metaclass = getattr(cls, '__metaclass__', type)
>     new_cls = metaclass(cls.__name__, bases, dict(cls.__dict__))
>     cls.__class__ = new_cls
>
> class MetaBase(type):
>     __bases__ = property(base_getter, base_setter)
>
>     def __new__(cls, name, bases, ns):
>         ns['__my_bases__'] = tuple(bases)
>         return super(MetaBase, cls).__new__(cls, name, bases, ns)
>
> class Foo(object):
>     __metaclass__ = MetaBase
>
> class Baz(object): pass
>
> Foo.__bases__ = Foo.__bases__ + (Baz,)
I don't think this has a hope of working, does it? This approach would
need to rebind "Foo" in the last line, no?
> Which results in:
> TypeError: __class__ assignment: 'Foo' object layout differs from 'MetaBase'
>
> I haven't looked into why this is being flagged as a layout error, though
> my first instinct is to say that the check is too conservative in this case.
> I'll think about it more and dig into the code.
I think the error message gives it away: in "cls.__class__ = new_cls",
cls.__class__ is MetaBase, new_cls is the new Foo.
I guess one way of doing this would be to reimplement mro resolution
and so on in Python, which would be annoying and inefficient. Hmm.
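For reference, the core of such a reimplementation would be a C3-style
merge over the bases' linearizations, roughly like this (a sketch with
invented names, not what typeobject.c literally does):

    def c3_merge(seqs):
        # Repeatedly take the first head that does not appear in the
        # tail of any other sequence, then remove it everywhere.
        result = []
        seqs = [list(s) for s in seqs if s]
        while seqs:
            for seq in seqs:
                head = seq[0]
                if not [s for s in seqs if s is not seq and head in s[1:]]:
                    break
            else:
                raise TypeError("inconsistent hierarchy, no linearization")
            result.append(head)
            seqs = [[c for c in s if c is not head] for s in seqs]
            seqs = [s for s in seqs if s]
        return result

    def compute_mro(cls):
        # MRO of cls = cls followed by the merge of its bases' MROs
        # and the list of bases itself.
        return c3_merge([[cls]] +
                        [compute_mro(base) for base in cls.__bases__] +
                        [list(cls.__bases__)])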
Cheers,
M.
--
[3] Modem speeds being what they are, large .avi files were
generally downloaded to the shell server instead[4].
[4] Where they were usually found by the technical staff, and
burned to CD. -- Carlfish, asr