Hi all, I'm a little confused by the following exception:

  File "f:\src\python-cvs\xpcom\server\policy.py", line 18, in ?
    from xpcom import xpcom_consts, _xpcom, client, nsError, ServerException, COMException
  exceptions.SyntaxError: BuildInterfaceInfo: exec or 'import *' makes names ambiguous in nested scope (__init__.py, line 71)

This sounds a lot like Tim's question on this a while ago, and from all accounts this had been resolved (http://mail.python.org/pipermail/python-dev/2001-February/012456.html). In that mail, Jeremy writes:

-- quote --
from Percolator import Percolator
in a class definition. That smells like a bug, not a debatable design choice.
Percolator has "from x import *" code. This is what is causing the exception. I think it has already been fixed in CVS though, so should work again.

-- end quote --

However, Tim replied saying that it still didn't work for him. There was never a followup saying "it does now".

In this case, the module being imported from does _not_ use "from module import *" at all, but is a parent package. The only name referenced by the __init__ function is "ServerException", and that is a simple class. The only "import *" I can track is via the name "client", which is itself a package and does the "import *" about 3 modules deep.

Any clues?

Thanks, Mark.
On Tue, Feb 20, 2001 at 11:12:23PM +1100, Mark Hammond wrote:
Hi all, I'm a little confused by the following exception:
File "f:\src\python-cvs\xpcom\server\policy.py", line 18, in ?
  from xpcom import xpcom_consts, _xpcom, client, nsError, ServerException, COMException
exceptions.SyntaxError: BuildInterfaceInfo: exec or 'import *' makes names ambiguous in nested scope (__init__.py, line 71)
[ However, no 'from foo import *' to be found, except at module level ]
Any clues?
I don't have the xpcom package, so I can't check myself, but have you
considered 'exec' as well as 'from foo import *' ?
--
Thomas Wouters
[Thomas]
I don't have the xpcom package, so I can't check myself,
As of the last 24 hours, it sits in the Mozilla CVS tree - extensions/python/xpcom :)
but have you considered 'exec' as well as 'from foo import *' ?
exec appears exactly once, in a function in the "client" sub-package.

Mark.
"MH" == Mark Hammond
writes:
MH> [Thomas]
I don't have the xpcom package, so I can't check myself,
MH> As of the last 24 hours, it sits in the Mozilla CVS tree -
MH> extensions/python/xpcom :)

Don't know where to find that :-)
but have you considered 'exec' as well as 'from foo import *' ?
MH> exec appears exactly once, in a function in the "client"
MH> sub-package.

Does the function that contains the exec also contain another function or lambda? If it does and the contained function has references to non-local variables, the compiler will complain. The exception should include the line number of the first line of the function body that contains the import * or exec.

Jeremy
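A minimal sketch of the shape Jeremy describes - a function containing both an exec and a nested function with a free variable. (Hypothetical names; modern exec(...) call syntax is shown, since 2001-era Python spells exec as a statement, and that statement form is exactly what the 2.1 compiler rejects with the SyntaxError above.)

```python
def problem():
    y = 1
    exec("y = 2")   # 2.1 statement form: exec "y = 2" -- a compile-time error there
    def g():
        return y    # free variable: which y should this see?
    return g()
```

In today's Python the exec writes into a throwaway copy of the function's locals, so the closure still sees y == 1; the 2.1 compiler refused to pick a meaning at all.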
MH> As of the last 24 hours, it sits in the Mozilla CVS tree -
MH> extensions/python/xpcom :)
Don't know where to find that :-)
I could tell you if you like :)
but have you considered 'exec' as well as 'from foo import *' ?
MH> exec appears exactly once, in a function in the "client"
MH> sub-package.
Does the function that contains the exec also contain another function or lambda? If it does and the contained function has references to non-local variables, the compiler will complain.
It appears this is the problem. The fact that only "__init__.py" was listed threw me - I have a few of them :)

*sigh* - this is a real shame. IMO, we can't continue to break existing code, even if it is good for me! People are going to get mighty annoyed - I am. And if people on python-dev struggle with some of the new errors, the poor normal users are going to feel even more alienated.

Mark.
Does the function that contains the exec also contain another function or lambda? If it does and the contained function has references to non-local variables, the compiler will complain.
It appears this is the problem. The fact that only "__init__.py" was listed threw me - I have a few of them :)
*sigh* - this is a real shame. IMO, we can't continue to break existing code, even if it is good for me! People are going to get mighty annoyed - I am. And if people on python-dev struggle with some of the new errors, the poor normal users are going to feel even more alienated.
Sigh indeed. We could narrow it down to only raise the error if there are nested functions or lambdas that don't reference free variables, but unfortunately most of them will reference at least some builtin, e.g. str()...

How about the old fallback to using straight dict lookups when this combination of features is detected?

--Guido van Rossum (home page: http://www.python.org/~guido/)
Does the function that contains the exec also contain another function or lambda? If it does and the contained function has references to non-local variables, the compiler will complain.
It appears this is the problem. The fact that only "__init__.py" was listed threw me - I have a few of them :)
*sigh* - this is a real shame. IMO, we can't continue to break existing code, even if it is good for me! People are going to get mighty annoyed - I am. And if people on python-dev struggle with some of the new errors, the poor normal users are going to feel even more alienated.
Sigh indeed. We could narrow it down to only raise the error if there are nested functions or lambdas that don't reference free variables, but unfortunately most of them will reference at least some builtin e.g. str()...
How about the old fallback to using straight dict lookups when this combination of features is detected?
I'm posting an opinion on this subject because I'm implementing nested scopes in jython.

It seems that we really should avoid breaking code using import * and exec, and to obtain this - I agree - the only way is to fall back to some straight dictionary lookup, when both import or exec and nested scopes are there.

But doing this, AFAIK related to the actual python nested scope impl and what I'm doing on the jython side, is quite messy, because we will need to keep around "chained" closures as entire dictionaries, because we don't know if an exec or import will hide some variable from an outer level, or add a new variable that then cannot be interpreted as a global one in nested scopes. This is IMO too heavyweight.

Another way is to use special rules (similar to those for class defs), e.g. having

<frag>
y = 3
def f():
    exec "y=2"
    def g():
        return y
    return g()

print f()
</frag>

# print 3.

Is that confusing for users? Maybe they will more naturally expect 2 as outcome (given nested scopes).

The last possibility (but I know this one has been somehow discarded) is to have scoping only if explicitly declared; I imagine something like

<frag>
y = 3
def f():
    let y
    exec "y=2"
    def g():
        return y
    return g()

print f()
</frag>

# print 2.

Issues with this:

- with implicit scoping we naturally obtain that nested func defs can call themselves recursively:
  * we can require a let for this too
  * we can introduce "horrible" things like 'defrec' or 'deflet'
  * we can have def imply a let: this breaks

      def get_str():
          def str(v):
              return "str: " + str(v)
          return str

    but nested scopes as actually implemented already break that.
- with this approach inner scopes can change the value of outer scope vars: this was considered a non-feature...
- what's the gain with this approach?
If we consider code like this:

def f(str):  # eg str = "y=z"
    from foo import *
    def g():
        exec str
        return y
    return g

without explicit 'let' decls, if we want to compile this and not just say "you can't do that", the closure of g should be constructed out of the entire runtime namespace of f. With explicit 'let's in this case we would produce just the old code and semantics. If some 'let' would be added to f, we would know what part of the namespace of f should be used to construct the closure of g. In absence of import * and exec we could use the current fast approach to implement nested scopes; if they are there, we would know what vars should be stored in cells and passed down to inner scopes.

[We could have special locals dicts that can contain direct values or cells, and that would do the right indirect get and set for the cell case. These dicts could also possibly be returned by "locals()", and that would be the way to implement exec "spam", just equivalently to exec "spam" in globals(), locals(). import * would have just the assignment semantics.]

Very likely I'm missing something, but from my "external" viewpoint I would have preferred such a solution. IMO maybe it would be good to think about this, because differently from what was expected, implicit scoping has consequences that we would better avoid. Is it too late for that (given the feature freeze)?

regards, Samuele Pedroni.
How about the old fallback to using straight dict lookups when this combination of features is detected?
I'm posting an opinion on this subject because I'm implementing nested scopes in jython.
It seems that we really should avoid breaking code using import * and exec, and to obtain this - I agree - the only way is to fall back to some straight dictionary lookup, when both import or exec and nested scopes are there.

But doing this, AFAIK related to the actual python nested scope impl and what I'm doing on the jython side, is quite messy, because we will need to keep around "chained" closures as entire dictionaries, because we don't know if an exec or import will hide some variable from an outer level, or add a new variable that then cannot be interpreted as a global one in nested scopes. This is IMO too heavyweight.
Another way is to use special rules (similar to those for class defs), e.g. having
<frag>
y = 3
def f():
    exec "y=2"
    def g():
        return y
    return g()

print f()
</frag>
# print 3.
Is that confusing for users? Maybe they will more naturally expect 2 as outcome (given nested scopes).
This seems the best compromise to me. It will lead to the least broken code, because this is the behavior that we had before nested scopes! It is also quite easy to implement given the current implementation, I believe.

Maybe we could introduce a warning rather than an error for this situation though, because even if this behavior is clearly documented, it will still be confusing to some, so it is better if we outlaw it in some future version.

--Guido van Rossum (home page: http://www.python.org/~guido/)
Guido> Sigh indeed....
Guido> How about the old fallback to using straight dict lookups when
Guido> this combination of features is detected?

This probably won't be a very popular suggestion, but how about pulling nested scopes (I assume they are at the root of the problem) until this can be solved cleanly?

Skip
This probably won't be a very popular suggestion, but how about pulling nested scopes (I assume they are at the root of the problem) until this can be solved cleanly?
Agreed. While I think nested scopes are kinda cool, I have lived without them, and really without missing them, for years. At the moment the cure appears worse than the symptoms in at least a few cases. If nothing else, it compromises the elegant simplicity of Python that drew me here in the first place!

Assuming that people really _do_ want this feature, IMO the bar should be raised so there are _zero_ backward compatibility issues.

Mark.
On Wed, Feb 21, 2001 at 01:58:18PM +1100, Mark Hammond wrote:
Assuming that people really _do_ want this feature, IMO the bar should be raised so there are _zero_ backward compatibility issues.
Even at the cost of additional implementation complexity? At the cost of having to learn "scopes are nested, unless you do these two things in which case they're not"?

Let's not waffle. If nested scopes are worth doing, they're worth breaking code. Either leave exec and from..import illegal, or back out nested scopes, or think of some better solution, but let's not introduce complicated backward compatibility hacks.

--amk
Even at the cost of additional implementation complexity?
I can only assume you are serious. IMO, absolutely!
Let's not waffle.
Agreed. IMO we are starting to waffle the minute we ignore backwards compatibility. If a new feature can't be added without breaking code that was not previously documented as illegal, then IMO it is simply a non-starter until Py3k. Indeed, I seem to recall many worthwhile features being added to the Py3k bit-bucket for exactly that reason. Mark.
"AMK" == Andrew Kuchling
writes:
AMK> On Wed, Feb 21, 2001 at 01:58:18PM +1100, Mark Hammond wrote:
Assuming that people really _do_ want this feature, IMO the bar should be raised so there are _zero_ backward compatibility issues.
AMK> Even at the cost of additional implementation complexity? At
AMK> the cost of having to learn "scopes are nested, unless you do
AMK> these two things in which case they're not"?
AMK>
AMK> Let's not waffle. If nested scopes are worth doing, they're
AMK> worth breaking code. Either leave exec and from..import
AMK> illegal, or back out nested scopes, or think of some better
AMK> solution, but let's not introduce complicated backward
AMK> compatibility hacks.

Well said.

Jeremy
On Tue, Feb 20, 2001 at 10:29:36PM -0500, Andrew Kuchling wrote:
Let's not waffle. If nested scopes are worth doing, they're worth breaking code.
I'm sorry, but that's bull -- I mean, I disagree completely. Nested scopes *are* a nice feature, but if we can't do them without breaking code in weird ways, we shouldn't, or at least *not yet*. I am still uneasy about the restrictions seemingly created just to facilitate the implementation issues of nested scopes, but I could live with them if they had been generating warnings for at least one release, preferably more. I'm probably more conservative than most people here, in that aspect, but I believe I am right in it ;)

Consider the average Joe User attempting to upgrade. He has to decide whether any of his scripts suffer from the upgrade, and then has to figure out how to fix them. In a case like Mark had, he is very likely to just give up and not upgrade, cursing Python while he's doing it.

Now consider a site admin (which I happen to be,) who has to make that decision for all the people on the site -- which can be tens of thousands of people. There is no way he is going to test all scripts; he is lucky to know who even *uses* Python. He can probably live with a clean error that is an obvious fix; that's part of upgrading. Some weird error that doesn't point to a fix, and a weird, inconsequential fix in the first place, isn't going to make him confident in upgrading. Now consider a distribution maintainer, who has to make that decision for potentially millions, many of which are site maintainers. He is not a happy camper.

I was annoyed by the socket.socket() change in 2.0, but at least we could pretend 1.6 was a real release and that there was a lot of advance warning. In this case, however, we had several instances of the 'bug' in the standard library itself, which a lot of people use as code examples. I have yet to see a book or tutorial that lists from-foo-import-* in a local scope as illegal, and I have yet to see *anything* that lists 'exec' (not 'in' something) in a local scope as illegal. Nevertheless, those two will now seem to break code.
Either leave exec and from..import illegal, or back out nested scopes, or think of some better solution, but let's not introduce complicated backward compatibility hacks.
We already *have* complicated backward compatibility hacks, though they are
masked as optimizations now. from-foo-import-* and exec are legal in a
function scope as long as you don't have a nested scope that references a
non-local name.
--
Thomas Wouters
"TW" == Thomas Wouters
writes:
TW> On Tue, Feb 20, 2001 at 10:29:36PM -0500, Andrew Kuchling wrote:
Let's not waffle. If nested scopes are worth doing, they're worth breaking code.
TW> I'm sorry, but that's bull -- I mean, I disagree
TW> completely. Nested scopes *are* a nice feature, but if we can't
TW> do them without breaking code in weird ways, we shouldn't, or at
TW> least *not yet*. I am still uneasy about the restrictions seemingly
TW> created just to facilitate the implementation issues of nested
TW> scopes, but I could live with them if they had been generating
TW> warnings for at least one release, preferably more.

A note of clarification seems important here: The restrictions are not being introduced to simplify the implementation. They're being introduced because there is no sensible meaning for code that uses import * and nested scopes with free variables. There are two possible meanings, each plausible and neither satisfying.

Jeremy
On Wed, Feb 21, 2001 at 09:56:40AM -0500, Jeremy Hylton wrote:
A note of clarification seems important here: The restrictions are not being introduced to simplify the implementation. They're being introduced because there is no sensible meaning for code that uses import * and nested scopes with free variables. There are two possible meanings, each plausible and neither satisfying.
I disagree. There are several ways to work around them, or the BDFL could just make a decision on what it should mean. The decision between using a local variable in an upper scope or a possible global is about as arbitrary as what 'if key in dict:' and 'for key in dict:' should do. I personally think it should behave exactly like:
def outer(x, y):
    a = ...
    from module import *
    def inner(x, y, z=a):
        ...
used to behave (before it became illegal.) That also makes it easy to
explain to people who already know the rule.
A possibly more practical solution would be to explicitly require a keyword to declare variables that should be taken from an upper scope rather than the global scope. Or a new keyword to define a closure ('def closure NAME():' comes to mind.) Lots of alternatives are available if the implementation of PEP 227 can't be done without introducing backwards incompatibility and strange special cases.
Because you have to admit (even though it's another hypothetical howl) that
it is odd that a function would *stop functioning* when you change a
lambda (or nested function) to use a closure, rather than the old hack:
def inner(x):
    exec ...
    myprint = sys.stderr.write
    spam = lambda x, myprint=myprint: myprint(x*100)
I don't *just* object to the backwards incompatibility, but also to the added complexity and the strange special cases, most of which were introduced (at my urging, I'll readily admit, and for which I should and do apologize) to reduce the impact of the incompatibility. I do not believe the ability to leave out the default-argument-hack (if you don't use import-*/exec in the same function) is worth all that.
--
Thomas Wouters
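The default-argument hack Thomas shows above can be sketched in runnable form (hypothetical names; the trick works identically with or without nested scopes, which is why pre-2.1 code leaned on it):

```python
def make_scalers():
    """Build three functions that multiply by 0, 1 and 2."""
    fns = []
    for i in range(3):
        # i=i freezes the current loop value at definition time;
        # without it every lambda would see the final value of i.
        fns.append(lambda x, i=i: x * i)
    return fns

scalers = make_scalers()
```

Each lambda carries its own snapshot of i as a default argument, so scalers[2](10) gives 20 even though the loop variable has long since moved on.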
[Thomas W]
apologize) to reduce the impact of the incompatibility. I do not believe the ability to leave out the default-argument-hack (if you don't use import-*/exec in the same function) is worth all that.
Ironically, I _fixed_ my original problem by _adding_ a default-argument-hack. This meant my lambda no longer used a global name but a local one. Well, I think it ironic anyway :)

For the record, the only reason I had to use exec in that case was because the "new" module is not capable of creating a new method. Trying to compile a block of code with a "return" statement but no function decl (to create a code object suitable for a method) fails at compile time.

Like-sands-through-the-hourglass ly, Mark.
"MH" == Mark Hammond
writes:
MH> [Thomas W]
apologize) to reduce the impact of the incompatibility. I do not believe the ability to leave out the default-argument-hack (if you don't use import-*/exec in the same function) is worth all that.
MH> Ironically, I _fixed_ my original problem by _adding_ a
MH> default-argument-hack. This meant my lambda no longer used a
MH> global name but a local one.
MH>
MH> Well, I think it ironic anyway :)

I think it's ironic, too! I laughed when I read your message.

MH> For the record, the only reason I had to use exec in that case
MH> was because the "new" module is not capable of creating a new
MH> method. Trying to compile a block of code with a "return"
MH> statement but no function decl (to create a code object suitable
MH> for a method) fails at compile time.

For the record, I realize that there is no reason for the compiler to complain about the code you wrote. If exec supplies an explicit namespace, then everything is hunky-dory. Assuming Guido agrees, I'll fix this ASAP.

Jeremy
Trying to compile a block of code with a "return" statement but no function decl (to create a code object suitable for a method) fails at compile time.
Maybe you could add a dummy function header, compile that, and extract the code object from the resulting function object?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,          | A citizen of NewZealandCorp, a       |
Christchurch, New Zealand          | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz         +--------------------------------------+
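Greg's suggestion, sketched with hypothetical names (modern spellings shown: exec as a function and __code__, where 2001-era Python uses the exec statement and func_code):

```python
import types

# Wrap the runtime-generated body in a dummy function header so it compiles.
src = "def _dummy(self):\n    return 42\n"
ns = {}
exec(compile(src, "<generated>", "exec"), ns)

# Extract the code object from the resulting function object.
code_obj = ns["_dummy"].__code__

# The code object can now be wrapped in a fresh function under any name.
meth = types.FunctionType(code_obj, globals(), "meth")
```

The dummy header never escapes: only the extracted code object is kept, and it can be rebound to whatever name the real method should have.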
For the record, the only reason I had to use exec in that case was because the "new" module is not capable creating a new method. Trying to compile a block of code with a "return" statement but no function decl (to create a code object suitable for a method) fails at compile time.
I don't understand. Methods do have a function declaration:

class C:
    def meth(self): pass

Or am I misunderstanding?

--Guido van Rossum (home page: http://www.python.org/~guido/)
[Guido]
I don't understand. Methods do have a function declaration:
class C:
    def meth(self): pass
Or am I misunderstanding?
The problem is I have a class object, and the source-code for the method body as a string, generated at runtime based on runtime info from the reflection capabilities of the system we are interfacing to. The simplest example is for method code of "return None".

I don't know how to get a code object for this snippet so I can use the new module to get a new method object. Attempting to compile this string gives a syntax error. There was some discussion a few years ago that adding "function" as a "compile type" may be an option, but I never progressed it.

So my solution is to create a larger string that includes the method declaration, like:

"""def foo(self):
  return None
"""

exec that, get the function object out of the exec'd namespace, and inject it into the class.

Mark.
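Mark's workaround can be sketched like this (hypothetical class and method names; modern exec(...) call syntax - 2.x code would use the exec statement and the new module to the same effect):

```python
class Proxy:
    pass

# Method body known only at runtime, e.g. from reflection info.
body = "return None"

# Wrap the body in a full def so it compiles, indenting each line.
src = "def foo(self):\n"
for line in body.splitlines():
    src += "    " + line + "\n"

# exec in a scratch namespace, then inject the function into the class.
ns = {}
exec(src, ns)
Proxy.foo = ns["foo"]
```

After the injection, Proxy().foo() runs the generated body just like a hand-written method.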
[Guido]
I don't understand. Methods do have a function declaration:
class C:
    def meth(self): pass
Or am I misunderstanding?
[Mark]
The problem is I have a class object, and the source-code for the method body as a string, generated at runtime based on runtime info from the reflection capabilities of the system we are interfacing to. The simplest example is for method code of "return None".
I don't know how to get a code object for this snippet so I can use the new module to get a new method object. Attempting to compile this string gives a syntax error. There was some discussion a few years ago that adding "function" as a "compile type" may be an option, but I never progressed it.
So my solution is to create a larger string that includes the method declaration, like:
"""def foo(self):
  return None
"""
exec that, get the function object out of the exec'd namespace and inject it into the class.
Aha, I see. That's how I would have done it too. I admit that it's attractive to exec this in the local namespace and then simply use the local variable 'foo', but that doesn't quite work, so 'exec...in...' is the right thing to do anyway.

--Guido van Rossum (home page: http://www.python.org/~guido/)
On Wed, Feb 21, 2001 at 09:56:40AM -0500, Jeremy Hylton wrote:
A note of clarification seems important here: The restrictions are not being introduced to simplify the implementation. They're being introduced because there is no sensible meaning for code that uses import * and nested scopes with free variables. There are two possible meanings, each plausible and neither satisfying.
I disagree. There are several ways to work around them, or the BDFL could just make a decision on what it should mean.
Since import * is already illegal according to the reference manual, that's an easy call: I pronounce that it's illegal. For b/w compatibility we'll try to allow it in as many situations as possible where it's not ambiguous.
I don't *just* object to the backwards incompatibility, but also to the added complexity and the strange special cases, most of which were introduced (at my urging, I'll readily admit, and for which I should and do apologize) to reduce the impact of the incompatibility. I do not believe the ability to leave out the default-argument-hack (if you don't use import-*/exec in the same function) is worth all that.
The strange special cases should not remain a permanent wart in the language; rather, import * in functions should be considered deprecated. In 2.2 we should issue a warning for this in most cases. (Is there as much of a hassle with exec? IMO exec without an in-clause should also be deprecated.)

--Guido van Rossum (home page: http://www.python.org/~guido/)
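The in-clause Guido refers to is what keeps exec unambiguous: with an explicit namespace, the executed code cannot touch the enclosing function's names. A sketch with hypothetical names (modern call syntax; 2.x spells it exec "..." in ns):

```python
def f():
    y = 1
    ns = {}
    exec("y = 2", ns)   # assignment lands in ns, not in f's scope
    def g():
        return y        # free variable unambiguously refers to f's y
    return g(), ns["y"]
```

f() returns (1, 2): the closure sees f's y untouched, while the exec'd assignment is confined to ns, so there is nothing for the compiler to complain about.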
"MH" == Mark Hammond
writes:
This probably won't be a very popular suggestion, but how about pulling nested scopes (I assume they are at the root of the problem) until this can be solved cleanly?
MH> Agreed. While I think nested scopes are kinda cool, I have
MH> lived without them, and really without missing them, for years.
MH> At the moment the cure appears worse than the symptoms in at
MH> least a few cases. If nothing else, it compromises the elegant
MH> simplicity of Python that drew me here in the first place!

Mark, I'll buy that you're suffering at the moment, but I'm not sure why. You have a lot of code that uses 'from ... import *' inside functions? If so, that's the source of the compatibility problem. If you had a tool that listed all the code that needed to be fixed and/or you got tracebacks that highlighted the offending line rather than some import, would you still be suffering? It sounds like the problem wouldn't be much harder than multi-argument append at that point.

I also disagree strongly with the argument that nested scopes compromise the elegant simplicity of Python. Did you really look at Python and say, "None of those stinking scoping rules. Let me at it." <wink> I think the new rules are different, but no more or less complex than the old ones.

Jeremy
[Jeremy]
I'll buy that you're suffering at the moment, but I'm not sure why.
I apologize if I sounded antagonistic.
You have a lot of code that uses 'from ... import *' inside functions. If so, that's the source of the compatibility problem. If you had a tool that listed all the code that needed to be fixed and/or you got tracebacks that highlighted the offending line rather than some import, would you still be suffering?
The point isn't about my suffering as such. The point is more that python-dev owns a tiny amount of the code out there, and I don't believe we should put Python's users through this. Sure - I would be happy to "upgrade" all the win32all code, no problem. I am also happy to live in the bleeding edge and take some pain that will cause. The issue is simply the user base, and giving Python a reputation of not being able to painlessly upgrade even dot revisions.
It sounds like the problem wouldn't be much harder then multi-argument append at that point.
Yup. I changed my code in relative silence on that issue, but believe we should not have been so hasty. Now that we have warnings, I believe it would have been handled slightly differently if done today. It also had existing documentation to back it. Further, I believe that issue has contributed to a "no painless upgrade" perception already existing in some people's minds.
I also disagree strongly with the argument that nested scopes compromise the elegant simplicity of Python. Did you really look at Python and say, "None of those stinking scoping rules. Let me at it." <wink> I think the new rules are different, but no more or less complex than the old ones.
exec and eval take 2 dicts - there were 2 namespaces. I certainly have missed nested scopes, but instead of "let me at it", I smiled at the elegance and simplicity it buys me. I also didn't have to worry about "namespace clashes" and obscure rules. I wrote code the way I saw fit at the time, and didn't have to think about scoping rules. Even if we ignore existing code breaking, it is almost certain I would have coded the function the same way, got the syntax error, tried to work out exactly what it was complaining about, and adjusted my code accordingly. Python is generally bereft of such rules, and all the more attractive for it. So I am afraid my perception remains.

That said, I am not against nested scopes, as I trust the judgement of people smarter than I. However, I am against code breakage that is somehow "good for me", and suspect many other Python users are too.

Just-one-more-reason-why-I-aint-the-BDFL-<wink> ly, Mark.
mark wrote:
Agreed. While I think nested scopes are kinda cool, I have lived without them, and really without missing them, for years.
in addition, it breaks existing code, all existing books, and several tools. doesn't sound like it really belongs in an X.1 release...

maybe it should be ifdef'ed out, and not switched on by default until we reach 3.0?

Cheers /F
Fredrik> maybe it should be ifdef'ed out, and not switched on by default
Fredrik> until we reach 3.0?

I think that's a very reasonable path to take.

Skip
I did a brief review of three Python projects to see how they use import * and exec, and to assess how much code will break in these projects.

  Project   Python   Lines of      import *   exec      illegal
            files    Python code   in func    in func   exec
  Python    1127     113443        4?         <57       0
  Zope2     469      71370         0          15        1
  PyXPCOM   26       2611          0          1         1

  (excluding comment lines)

The numbers are a little rough for Python, because I think I've fixed all the problems. As I recall, there were four instances of import * being used in a function. I think two of those would still be flagged as errors, while two would be allowed under the current rules (only barred when the current func contains another that has free variables).

There is one illegal exec in Zope and one in PyXPCOM, as Mark well knows.

That makes a total of 4 fixes in almost 200,000 lines of code. These fixes should be pretty easy. The code won't compile until it's fixed. One could imagine many worse problems, like code that runs but has a different meaning. I should be able to fix the tracebacks so they indicate the source of the problem more clearly.

I also realized that the exec rule is still too strict. If the exec statement passes an explicit namespace -- "exec ... in foo" -- then there shouldn't be any problem, because the executed code can't affect the current namespace. If this form is allowed, the exec errors in xpcom and Zope disappear.

It would be instructive to hear if the data would look different if I chose different projects. Perhaps the particular examples I chose are simply examples of excellent coding style by master programmers.

Jeremy
Jeremy> That makes a total of 4 fixes in almost 200,000 lines of
Jeremy> code. These fixes should be pretty easy.

Jeremy,

Pardon my bluntness, but I think you're missing the point. The fact that it would be easy to make these changes for version N+1 of package XYZ ignores the fact that users of XYZ version N may want to upgrade to Python 2.1 for whatever reason, but can't easily upgrade to XYZ version N+1. Maybe they need to pay an upgrade fee. Maybe they include XYZ in another product and can't afford to run too far ahead of their clients. Maybe XYZ is available to them only as bytecode. Maybe there's just too darn much code to pore through and retest. Maybe ...

I've rarely found it difficult to fix compatibility problems in isolation. It's the surrounding context that gets you.

Skip
"SM" == Skip Montanaro
writes:
Jeremy> That makes a total of 4 fixes in almost 200,000 lines of
Jeremy> code. These fixes should be pretty easy.

SM> Jeremy,
SM>
SM> Pardon my bluntness, but I think you're missing the point.

I don't mind if you're blunt :-).

SM> I've rarely found it difficult to fix compatibility problems in
SM> isolation. It's the surrounding context that gets you.

I appreciate that there are compatibility problems, although I'm hard pressed to quantify them to any extent. My employer still uses Python 1.5.2 because of perceived compatibility problems, although I use Zope with 2.1 on my machine.

Any change we make to Python that introduces incompatibilities is going to make it hard for some people to upgrade. When we began work on the 2.1 alpha cycle, I had the impression that we decided that some amount of incompatibility is acceptable. I think PEP 227 is the chief incompatibility, but there are other changes. For example, the warnings framework now spits out messages to stderr; I imagine this could be unacceptable in some situations. The __all__ change might cause problems for some code, as we saw with the pickle module. The format of exceptions has changed in some cases, which makes trouble for users of doctest.

I'll grant you that there are differences in degree among these various changes. Nonetheless, any of them could be a potential roadblock for upgrading. There were a bunch more in 2.0. (Sidenote: If you haven't upgraded to 2.0 yet, then you can jump right to 2.1 when you finally do.)

The recent flurry of discussion was generated by a single complaint about the exec problem. It appeared to me that this was the last straw for many people, and you, among others, suggested today that we delay nested scopes. This surprised me, because the problem was much shallower than some of the other compatibility issues that had been discussed earlier, including the one attributed to you in the PEP.
If I understand correctly, though, you are objecting to any changes that break backwards compatibility. The fact that recent discussion prompted you to advocate this is coincidental.

The question, then, is whether some amount of incompatible change is acceptable in the 2.1 release. I don't think the specific import */exec issues have anything to do with it, because even if they didn't exist there would still be compatibility issues.

Jeremy
    Jeremy> The question, then, is whether some amount of incompatible
    Jeremy> change is acceptable in the 2.1 release.

I think of 2.1 as a minor release. Minor releases generally equate in my mind with bug fixes, not significant functionality changes or potential compatibility problems. I think many other people feel the same way.

Earlier this month I suggested that adopting a release numbering scheme similar to that used for the Linux kernel would be appropriate. Perhaps it's not so much the details of the numbering as the up-front statement of something like "version numbers like x.y where y is even represent stable releases" or "backwards incompatibility will only be introduced when the major version number is incremented". It's more that a statement about stability vs. new features serves as a published commitment the user community can rely on. After all the changes that made it into 2.0, I don't think anyone expected to have to address compatibility problems with 2.1.

Skip
"SM" == Skip Montanaro
writes:
    Jeremy> The question, then, is whether some amount of incompatible
    Jeremy> change is acceptable in the 2.1 release.

    SM> I think of 2.1 as a minor release. Minor releases generally
    SM> equate in my mind with bug fixes, not significant functionality
    SM> changes or potential compatibility problems. I think many other
    SM> people feel the same way.

Fair enough. It sounds like you are concerned, on general grounds, about incompatible changes, and the specific exec/import issues aren't any more or less important than the other compatibility issues. I don't think I agree with you, but I'll sit on it for a few days and see what real problem reports there are.

thinking-there-will-be-lots-to-talk-about-at-the-conference-ly y'rs,
Jeremy
Jeremy> The question, then, is whether some amount of incompatible Jeremy> change is acceptable in the 2.1 release.
I think of 2.1 as a minor release. Minor releases generally equate in my mind with bug fixes, not significant functionality changes or potential compatibility problems. I think many other people feel the same way.
Hm, I disagree. Remember, back in the days of Python 1.x, we introduced new stuff even with micro releases (1.5.2 had a lot of stuff that 1.5.1 didn't).

My "feel" for Python version numbers these days is that the major number only needs to be bumped for very serious reasons. We switched to 2.0 mostly for PR reasons, and I hope we can stay at 2.x for a while. Pure bugfix releases will have a 3rd numbering level; in fact there will eventually be a 2.0.1 release that fixes bugs only (including the GPL incompatibility bug in the license!).

2.x versions can introduce new things. We'll do our best to keep old code from breaking unnecessarily, but I don't want our success to stand in the way of progress, and I will allow some things to break occasionally if it serves a valid purpose. You may consider this a break with tradition -- so be it. If 2.1 really breaks too much code, we will fix the policy for 2.2, and do our darndest to fix the code in 2.1.1.
Earlier this month I suggested that adopting a release numbering scheme similar to that used for the Linux kernel would be appropriate.
Please no! Unless you make a living hacking Linux kernels, it's too hard to remember which is odd and which is even, because it's too arbitrary.
Perhaps it's not so much the details of the numbering as the up-front statement of something like "version numbers like x.y where y is even represent stable releases" or "backwards incompatibility will only be introduced when the major version number is incremented". It's more that a statement about stability vs. new features serves as a published commitment the user community can rely on. After all the changes that made it into 2.0, I don't think anyone expected to have to address compatibility problems with 2.1.
I don't want to slide into version number inflation. There's not enough new in 2.1 to call it 3.0. --Guido van Rossum (home page: http://www.python.org/~guido/)
Guido wrote:
Hm, I disagree. Remember, back in the days of Python 1.x, we introduced new stuff even with micro releases (1.5.2 had a lot of stuff that 1.5.1 didn't).
Last year, we upgraded a complex system from 1.2 to 1.5.2. Two modules broke: one didn't expect exceptions to be instances, and one messed up under the improved module cleanup model.

We recently upgraded another major system from 1.5.2 to 2.0. It was a much larger undertaking; about 50 modules were affected. And six months after 2.0, we'll end up with yet another incompatible version...

As a result, we end up with a lot more versions in active use, more support overhead, maintenance hell for extension writers (tried shipping a binary extension lately?), training headaches ("it works this way in 1.5.2 and 2.0 but this way in 2.1, but this works this way in 1.5.2 but this way in 2.0 and 2.1, and this works..."), and all our base are belong to cats.
2.x versions can introduce new things. We'll do our best to keep old code from breaking unnecessarily, but I don't want our success to stand in the way of progress, and I will allow some things to break occasionally if it serves a valid purpose.
But nested scopes breaks everything: books (2.1 will appear at about the same time as the first batch of 2.0 books), training materials, gurus, tools, and as we've seen, some code.
I don't want to slide into version number inflation. There's not enough new in 2.1 to call it 3.0.
Besides nested scopes, that is. I'm just an FL, but I'd leave them out of a release that follows only 6 months after a major release, no matter what version number we're talking about. Leave the new compiler in, and use it to warn about import/exec (can it detect shadowing too?), but don't make the switch until everyone's ready. Cheers /F
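[Editor's note: Fredrik's suggestion to warn about "import */exec" rather than fail points at how the conflict was eventually settled. A small sketch in modern Python, not part of the original thread, showing that "import *" at function scope is now rejected at compile time, so the nested-scopes ambiguity never arises:]

```python
# Sketch (modern CPython): compiling a function-level 'from ... import *'
# fails with a SyntaxError, instead of silently interacting with nested
# scopes as it could in the 2.1-era discussion above.
src = """
def f():
    from os import *
    def g():
        return getcwd()
"""

try:
    compile(src, "<example>", "exec")
    outcome = "compiled"
except SyntaxError as exc:
    outcome = "SyntaxError: " + str(exc.msg)

print(outcome)
```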
"SM" == Skip Montanaro
writes:
    Guido> Sigh indeed....

It sounds like the real source of frustration was the confusing error message. I'd rather fix the error message.

    Guido> How about the old fallback to using straight dict lookups
    Guido> when this combination of features is detected?

Straight dict lookups aren't sufficient for most cases, because the question is one of whether to build a closure or not.

    def f():
        from module import *
        def g(l):
            len(l)

If len is not defined in f, then the compiler generates a LOAD_GLOBAL for len. If it is defined in f, then it creates a closure for g (MAKE_CLOSURE instead of MAKE_FUNCTION), generating a LOAD_DEREF for len. As far as I can tell, there's no trivial change that will make this work.

    SM> This probably won't be a very popular suggestion, but how about
    SM> pulling nested scopes (I assume they are at the root of the
    SM> problem) until this can be solved cleanly?

Not popular with me <0.5 wink>, but only because I don't think this is a problem that can be "solved" cleanly. I think it's far from obvious what the code example above should do in the case where module defines the name len. Posters on c.l.py have suggested both alternatives as the logical choice: (1) import * is dynamic, so the static scoping rule ignores the names it introduces; (2) Python is a late-binding language, so the name binding introduced by import * is used.

Jeremy
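[Editor's note: the compile-time decision Jeremy describes, LOAD_GLOBAL versus a closure cell with LOAD_DEREF, is easy to observe in modern Python. A minimal sketch, not from the original thread; the function names are hypothetical:]

```python
# Whether an inner function's free name binds to the global scope or to
# an enclosing local is fixed at compile time, based only on whether the
# enclosing function binds that name.

def outer_without_binding():
    def inner(seq):
        return len(seq)      # no 'len' in outer: compiled as a global lookup
    return inner

def outer_with_binding():
    len = lambda seq: -1     # binding in outer shadows the builtin
    def inner(seq):
        return len(seq)      # free variable: compiled as a closure lookup
    return inner

print(outer_without_binding()([1, 2, 3]))   # 3 (builtin len)
print(outer_with_binding()([1, 2, 3]))      # -1 (enclosing binding)
print(outer_without_binding().__closure__)  # None: no closure was built
```

The first inner function carries no closure at all; the second carries a cell for len, which is why no runtime dict-lookup fallback can paper over an `import *` whose names are unknown at compile time.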
participants (10)
- Andrew Kuchling
- Fredrik Lundh
- Fredrik Lundh
- Greg Ewing
- Guido van Rossum
- Jeremy Hylton
- Mark Hammond
- Samuele Pedroni
- Skip Montanaro
- Thomas Wouters