Hi.
[Mark Hammond]
> The point isn't about my suffering as such. The point is more that
> python-dev owns a tiny amount of the code out there, and I don't believe we
> should put Python's users through this.
>
> Sure - I would be happy to "upgrade" all the win32all code, no problem. I
> am also happy to live in the bleeding edge and take some pain that will
> cause.
>
> The issue is simply the user base, and giving Python a reputation of not
> being able to painlessly upgrade even dot revisions.
I agree with all this.
[As I imagined, explicit syntax did not catch on and would require a
lot of discussion.]
[GvR]
> > Another way is to use special rules
> > (similar to those for class defs), e.g. having
> >
> > <frag>
> > y = 3
> > def f():
> >     exec "y=2"
> >     def g():
> >         return y
> >     return g()
> >
> > print f()
> > </frag>
> >
> > # prints 3.
> >
> > Is that confusing for users? Maybe they will more naturally expect 2
> > as the outcome (given nested scopes).
>
> This seems the best compromise to me. It will lead to the least
> broken code, because this is the behavior that we had before nested
> scopes! It is also quite easy to implement given the current
> implementation, I believe.
>
> Maybe we could introduce a warning rather than an error for this
> situation though, because even if this behavior is clearly documented,
> it will still be confusing to some, so it is better if we outlaw it in
> some future version.
>
Yes, this can be easy to implement, but more confusing situations can arise:
<frag>
y = 3
def f():
    y = 9
    exec "y=2"
    def g():
        return y
    return y, g()

print f()
</frag>
What should this print? The situation does not admit a canonical solution
the way class def scopes do. Or consider:
<frag>
def f():
    from foo import *
    def g():
        return y
    return g()

print f()
</frag>
[Mark Hammond]
> > This probably won't be a very popular suggestion, but how about pulling
> > nested scopes (I assume they are at the root of the problem)
> > until this can be solved cleanly?
>
> Agreed. While I think nested scopes are kinda cool, I have lived without
> them, and really without missing them, for years. At the moment the cure
> appears worse then the symptoms in at least a few cases. If nothing else,
> it compromises the elegant simplicity of Python that drew me here in the
> first place!
>
> Assuming that people really _do_ want this feature, IMO the bar should be
> raised so there are _zero_ backward compatibility issues.
I won't say anything about pulling nested scopes (I don't think my opinion
can change things in this respect), but I must insist that without explicit
syntax, IMO raising the bar either has too high an implementation cost
(both performance and complexity) or creates confusion.
[Andrew Kuchling]
> >Assuming that people really _do_ want this feature, IMO the bar should be
> >raised so there are _zero_ backward compatibility issues.
>
> Even at the cost of additional implementation complexity? At the cost
> of having to learn "scopes are nested, unless you do these two things
> in which case they're not"?
>
> Let's not waffle. If nested scopes are worth doing, they're worth
> breaking code. Either leave exec and from..import illegal, or back
> out nested scopes, or think of some better solution, but let's not
> introduce complicated backward compatibility hacks.
IMO breaking code would be OK if we issued warnings today and had nested
scopes issue errors tomorrow. But this is simply a statement of principle
and of the impression raised.
IMO import * in an inner scope should end up being an error; I'm not sure
about 'exec'.
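For reference, a quick check under a later interpreter shows where both cases eventually landed: import * at function scope is rejected outright, and exec became a function that cannot rebind a local in the enclosing frame, so neither ambiguity can even be written. A sketch in modern syntax:

```python
# Sketch (modern Python syntax) of how these two cases were resolved later.

# 'from module import *' at function scope is simply a SyntaxError:
try:
    compile("def f():\n    from os import *\n", "<demo>", "exec")
except SyntaxError as e:
    print("rejected:", e.msg)

# exec is a function and cannot rebind a local in the calling frame,
# so the "which y does g() see?" question never arises:
def f():
    y = 9
    exec("y = 2")  # mutates a snapshot of the locals, not y itself
    return y

print(f())  # -> 9
```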
We will need a final BDFL statement.
regards, Samuele Pedroni.
Looking at a bug report Fred forwarded, I realized that after
py-howto.sourceforge.net was set up, www.python.org/doc/howto was
never changed to redirect to the SF site instead. As of this
afternoon, that's now done; links on www.python.org have been updated,
and I've added the redirect.
Question: is it worth blowing away the doc/howto/ tree now, or should
it just be left there, inaccessible, until work on www.python.org
resumes?
--amk
Hi.
Writing nested scopes support for jython (now it passes test_scope and
test_future <wink>), I have come across these further corner cases for
nested scopes mixed with global declarations. I have tried them with
python 2.1b1, and I wonder if the results are consistent with the
proposed rule:

a free variable is bound according to the nearest outer scope binding
(assignment-like or global decl); class scopes (for backward
compatibility) are ignored wrt this.
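As a sanity check of that rule, here is a small fragment in modern Python syntax (no exec, and a plain class-level assignment instead of a global decl) showing class scopes being skipped when a free variable is resolved:

```python
x = 'top'

def outer():
    x = 'outer'
    class A:
        x = 'class'          # class-scope binding, ignored by nested functions
        def method(self):
            return x         # free variable: resolved in outer, skipping A
    return A

print(outer()().method())    # -> 'outer', not 'class' or 'top'
```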
(I)
from __future__ import nested_scopes

x = 'top'
def ta():
    global x
    def tata():
        exec "x=1" in locals()
        return x  # LOAD_NAME
    return tata
print ta()() prints 1; I believed it should print 'top' and that a
LOAD_GLOBAL should have been produced. In this case the global binding is
somehow ignored. Note: putting a global decl in tata, or removing the
exec (one or the other, not both), makes tata deliver 'top' as I expected
(LOAD_GLOBALs are emitted).
Is this a bug, or am I missing something?
(II)
from __future__ import nested_scopes

x = 'top'
def ta():
    x = 'ta'
    class A:
        global x
        def tata(self):
            return x  # LOAD_GLOBAL
    return A

print ta()().tata()  # -> 'top'
Should not the global decl in class scope be ignored, so that x is bound
to the x in ta, resulting in 'ta' as output? That's what happens if one
substitutes global x with x='A'.
Or should only local bindings in class scope be ignored, but not global
decls?
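For comparison, essentially the same fragment run under a later Python (where nested scopes are the default) prints 'ta': the global decl in the class body affects only names used in the class body itself, and is ignored when resolving tata's free variable:

```python
x = 'top'

def ta():
    x = 'ta'
    class A:
        global x              # affects only names used in A's own body
        def tata(self):
            return x          # free variable: resolved in ta, skipping A
    return A

print(ta()().tata())          # -> 'ta'
```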
regards, Samuele Pedroni
[Oops, try again]
There's talk on the PythonMac-SIG to create an import hook that would
read modules with either \r, \n or \r\n newlines and convert them to
the local convention before feeding them to the rest of the import
machinery. The reason this has become interesting is the mixed
unixness/macness of MacOSX, where such an import hook could be used to
share a Python tree between MacPython and bsd-Python. They would only
need a different site.py (probably), living somewhere near the head of
sys.path, that would be in local end of line convention and enable the
hook.
However, it seems that such a module would have a much more general
application, for instance if you're accessing samba partitions from
Windows, or other foreign file systems, etc.
Does this sound like a good idea? And (even better:-) has anyone done
this already? Would it be of enough interest to include it in the
core Lib?
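The conversion step itself is tiny; a sketch of just the newline normalization (the hook machinery and any caching are omitted, and the function name is made up):

```python
def normalize_newlines(source):
    # Map Windows (\r\n) and Mac (\r) line endings to \n.
    # \r\n must be handled before bare \r, or each \r\n becomes \n\n.
    return source.replace('\r\n', '\n').replace('\r', '\n')
```

The import hook would apply this to a module's source before handing it to the normal compile/import machinery.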
--
Jack Jansen | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen(a)oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack | ++++ see http://www.xs4all.nl/~tank/ ++++
I understand the issue of "default Unicode encoding" is a loaded one,
however I believe with the Windows' file system we may be able to use a
default.
Windows provides 2 versions of many functions that accept "strings" - one
that uses "char *" arguments, and another using "wchar *" for Unicode.
Interestingly, the "char *" versions of function almost always support
"mbcs" encoded strings.
To make Python work nicely with the file system, we really should handle
Unicode characters somehow. It is not too uncommon to find that the
"program files" or the "user" directory has Unicode characters in
non-English versions of Win2k.
The way I see it, to fix this we have 2 basic choices when a Unicode object
is passed as a filename:
* we call the Unicode versions of the CRTL.
* we auto-encode using the "mbcs" encoding, and still call the non-Unicode
versions of the CRTL.
The first option has a problem in that determining what Unicode support
Windows 95/98 have may be more trouble than it is worth. Sticking to the
purely ASCII versions of the functions means that the worst thing that can
happen is a regular file-system error if an mbcs-encoded string is passed
on a non-Unicode platform.
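A sketch of the second option in current syntax (the helper name is made up; 'mbcs' is a Windows-only codec, so this falls back to the filesystem encoding elsewhere just to stay runnable):

```python
import sys

def filename_to_bytes(name):
    # Auto-encode a Unicode filename before calling the narrow ("char *")
    # CRTL functions. The 'mbcs' codec exists only on Windows builds.
    encoding = 'mbcs' if sys.platform == 'win32' else sys.getfilesystemencoding()
    return name.encode(encoding)
```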
Does anyone have any objections to this scheme or see any drawbacks in it?
If not, I'll knock up a patch...
Mark.
I just got caught out by this:
"""
def foo():
    pass

__all__ = [foo]
"""
Then at the interactive prompt:
>>> from foo import *
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: attribute name must be string
The problem is that __all__ contains a function object rather than a string
object. I had to use the debugger to determine why I was getting the
failure :( All you 2.1 veterans will immediately spot that it should read
'__all__ = ["foo"]'.
Looking at the __all__ code:
if (skip_leading_underscores &&
    PyString_Check(name) &&
    PyString_AS_STRING(name)[0] == '_')
{
    Py_DECREF(name);
    continue;
}
value = PyObject_GetAttr(v, name);
PyObject_GetAttr explicitly handles string and unicode objects. However,
the code here won't like Unicode that much :)
Would it make sense to explicitly raise a more meaningful exception here
if __all__ doesn't contain strings?
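A sketch of what such a check might look like (the helper name and the message are made up for illustration):

```python
def check_all(module_name, all_names):
    # Fail early, with a message that names the real culprit, instead of
    # the bare "attribute name must be string" from PyObject_GetAttr.
    for item in all_names:
        if not isinstance(item, str):
            raise TypeError("%s.__all__ must contain only strings, got %r"
                            % (module_name, item))
```

With the module above, check_all('foo', [foo]) would point straight at the offending function object instead of leaving you in the debugger.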
Rebooting the thread...
While testing mxNumber, I discovered what looks like a bug in
Win95: both Thomas Heller and I are seeing a problem with
Python 2.1 when importing extension modules which rely on other
DLLs as library.
Interestingly, the problem only shows up when starting Python
from the installation directory. Looking at the imports using
python -vv shows that in this situation, Python tries to import
modules, packages, extensions etc. using *relative* paths.
Now, under Win98 there is no problem, but Win95 doesn't seem
to like these relative imports when it comes to .pyd files
which reference DLLs in the same directory. It doesn't have
any problems when Python is started outside the installation
dir, since in that case Python uses absolute paths for importing
modules and extensions.
Would it be hard to tweak Python into always using absolute search
paths during module import?
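At the Python level the tweak could be as small as absolutizing sys.path before any imports happen (a sketch; the helper name is made up, and the real fix would presumably live in the import machinery itself):

```python
import os
import sys

def absolutize_sys_path():
    # Rewrite relative sys.path entries as absolute ones, so later imports
    # no longer depend on the process's current directory.
    sys.path[:] = [os.path.abspath(entry) for entry in sys.path]
```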
--
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting: http://www.egenix.com/
Python Software: http://www.lemburg.com/python/
A confused user on c.l.py reported that while
for x in file.xreadlines():
works fine,
map(whatever, file.xreadlines())
blows up with
TypeError: argument 2 to map() must be a sequence object
The docs say both contexts require "a sequence", so this is baffling to them.
It's apparently because map() internally insists that the sq_length slot be
non-null (but it's null in the xreadlines object), despite that map() doesn't
use it for anything other than *guessing* a result size (it keeps going until
IndexError is raised regardless of what len() returns, growing or shrinking
the preallocated result list as needed).
I think that's a bug in map(). Anyone disagree?
If so, fine, map() has to be changed to work with iterators anyway <wink>.
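To illustrate the point, a map-alike in modern syntax (XLines here is a made-up stand-in for the xreadlines object) that needs only __getitem__ and treats any length as a mere guess:

```python
class XLines:
    # Stand-in for xreadlines: indexable, but deliberately has no __len__.
    def __init__(self, lines):
        self._lines = lines
    def __getitem__(self, i):
        return self._lines[i]      # IndexError past the end stops the loop

def map_nolen(func, seq):
    # What map()'s main loop really requires: fetch items until IndexError;
    # any preallocated result size is only an optimization hint.
    result, i = [], 0
    while True:
        try:
            item = seq[i]
        except IndexError:
            return result
        result.append(func(item))
        i += 1

print(map_nolen(str.upper, XLines(['a', 'b'])))  # -> ['A', 'B']
```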
How are we going to identify all the places that need to become
iterator-aware, get all the code changed, and update the docs to match? In
effect, a bunch of docs for arguments need to say, in some words or other,
that the args must implement the iterator interface or protocol. I think
it's essential that we define the latter only once. But the docs don't
really define any interfaces/protocols now, so it's unclear where to put
that.
Fred, Pronounce. Better sooner than later, else I bet a bunch of code
changes will get checked in without appropriate doc changes, and the 2.2 docs
won't match the code.
On Sat, 28 Apr 2001, Tim Peters <tim_one(a)users.sourceforge.net> wrote:
> Modified Files:
> bltinmodule.c
> Log Message:
> Fix buglet reported on c.l.py: map(fnc, file.xreadlines()) blows up.
> Also a 2.1 bugfix candidate (am I supposed to do something with those?).
> Took away map()'s insistence that sequences support __len__, and cleaned
> up the convoluted code that made it *look* like it really cared about
> __len__ (in fact the old ->len field was only *used* as a flag bit, as
> the main loop only looked at its sign bit, setting the field to -1 when
> IndexError got raised; renamed the field to ->saw_IndexError instead).
Can anyone give me his opinion about whether 2.0.1 should have this
bug fix? It's not just for file.xreadlines(): the older fileinput.fileinput()
is hurt by this as well...
--
"I'll be ex-DPL soon anyway so I'm |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
-- Wichert Akkerman (on debian-private)| easier, more seductive.
For public key, finger moshez(a)debian.org |http://www.{python,debian,gnu}.org
> Nobody is in charge: if you know of a problem, please fix it. All
> the HTML stuff is under CVS control, and all Python project members
> have commit access for it, same as for everything else in the Python
> source tree; it's just under the nondist branch instead of the dist
> branch.
Ok, changed in CVS. Is the answer to SF-FAQ question 1.3 still
correct? That modified files have to be manually uploaded to SF? That
answer does not mention nondist/sf-html at all...
Regards,
Martin