Hi.
[Mark Hammond]
> The point isn't about my suffering as such. The point is more that
> python-dev owns a tiny amount of the code out there, and I don't believe we
> should put Python's users through this.
>
> Sure - I would be happy to "upgrade" all the win32all code, no problem. I
> am also happy to live in the bleeding edge and take some pain that will
> cause.
>
> The issue is simply the user base, and giving Python a reputation of not
> being able to painlessly upgrade even dot revisions.
I agree with all this.
[As I imagined, explicit syntax did not catch on and would require a
lot of discussion.]
[GvR]
> > Another way is to use special rules
> > (similar to those for class defs), e.g. having
> >
> > <frag>
> > y=3
> > def f():
> >     exec "y=2"
> >     def g():
> >         return y
> >     return g()
> >
> > print f()
> > </frag>
> >
> > # prints 3.
> >
> > Is that confusing for users? Maybe they will more naturally expect 2
> > as the outcome (given nested scopes).
>
> This seems the best compromise to me. It will lead to the least
> broken code, because this is the behavior that we had before nested
> scopes! It is also quite easy to implement given the current
> implementation, I believe.
>
> Maybe we could introduce a warning rather than an error for this
> situation though, because even if this behavior is clearly documented,
> it will still be confusing to some, so it is better if we outlaw it in
> some future version.
>
Yes, this can be easy to implement, but more confusing situations can arise:
<frag>
y=3
def f():
    y=9
    exec "y=2"
    def g():
        return y
    return y,g()
print f()
</frag>
What should this print? Unlike class def scopes, the situation does not
lead to a canonical solution.
or
<frag>
def f():
    from foo import *
    def g():
        return y
    return g()
print f()
</frag>
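For reference, the class def rule mentioned above already works this way; a
small sketch (mine) of the existing behaviour:
<frag>
y=3
class C:
    # the class body binds its own y, but a class body is not an
    # enclosing scope for the functions defined inside it
    y=2
    def g(self):
        return y    # resolves to the global y
print C().g()       # prints 3, just like the exec case above
</frag>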
[Mark Hammond]
> > This probably won't be a very popular suggestion, but how about pulling
> > nested scopes (I assume they are at the root of the problem)
> > until this can be solved cleanly?
>
> Agreed. While I think nested scopes are kinda cool, I have lived without
> them, and really without missing them, for years. At the moment the cure
> appears worse than the symptoms in at least a few cases. If nothing else,
> it compromises the elegant simplicity of Python that drew me here in the
> first place!
>
> Assuming that people really _do_ want this feature, IMO the bar should be
> raised so there are _zero_ backward compatibility issues.
I don't say anything about pulling nested scopes (I don't think my opinion
can change things in this respect),
but I must insist that without explicit syntax, IMO raising the bar
has too high an implementation cost (both performance and complexity) or
creates confusion.
[Andrew Kuchling]
> >Assuming that people really _do_ want this feature, IMO the bar should be
> >raised so there are _zero_ backward compatibility issues.
>
> Even at the cost of additional implementation complexity? At the cost
> of having to learn "scopes are nested, unless you do these two things
> in which case they're not"?
>
> Let's not waffle. If nested scopes are worth doing, they're worth
> breaking code. Either leave exec and from..import illegal, or back
> out nested scopes, or think of some better solution, but let's not
> introduce complicated backward compatibility hacks.
IMO breaking code would be OK if we issue warnings today and implement
nested scopes, issuing errors, tomorrow. But this is simply a statement
about principles and the impression we give.
IMO 'import *' in an inner scope should end up being an error;
I'm not sure about 'exec'.
We will need a final BDFL statement.
regards, Samuele Pedroni.
> So when the marshalled representation of 0.001 is loaded under
> "german" LC_NUMERIC here, we get back exactly 0.0. I'm not
> sure why.
When I call "marshal.dumps(0.1)" from AsyncDialog (or anywhere in the
Outlook code) I get "f\x030.0", which fits with what you have.
> So the obvious <wink> answers are:
(Glad you posted this - I was wading through the process of marshalling
(PyOS_snprintf etc) and getting rapidly lost).
> 1. When LC_NUMERIC is "german", MS C's atof() stops at the first
> period it sees.
This is the case:
"""
#include <locale.h>
#include <stdio.h>
#include <stdlib.h>
int main()
{
float f;
setlocale(LC_NUMERIC, "german");
f = atof("0.1");
printf("%f\n", f);
}
"""
Gives me with gcc version 3.2 20020927 (prerelease):
0.100000
Gives me with Microsoft C++ Builder (I don't have Visual C++ handy, but
I suppose it would be the same):
0,00000
The help file for Builder does say that this is the correct behaviour -
it will stop when it finds an unrecognised character - here '.' is
unrecognised (because we are in German), so it stops.
Does this then mean that this is a Python bug? Or, because Python tells
us not to change the C locale and we (Outlook) are doing so, is it our
fault/problem?
Presumably what we'll have to do for a solution is just what Mark is
doing now - find the correct place to put a call that (re)sets the C
locale back to English (or "C").
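From the Python side, an untested sketch of that call (finding the right
place for it in the Outlook code is the real question):
"""
import locale
import marshal

# Force the C library's numeric locale back to "C" so that floats are
# written and read with '.' as the decimal point.
locale.setlocale(locale.LC_NUMERIC, "C")

data = marshal.dumps(0.001)
print marshal.loads(data)   # expect 0.001, not 0.0
"""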
=Tony Meyer
On behalf of the Python development team and the Python community, I'm
happy to announce the release of Python 2.3 (final).
Nineteen months in the making, Python 2.3 represents a commitment to
stability and improved performance, with a minimum of new language
features. Countless bugs and memory leaks have been fixed, many new
and updated modules have been added, and the new type/class system
introduced in Python 2.2 has been significantly improved. Python 2.3
can be up to 30% faster than Python 2.2.
For more information on Python 2.3, including download links for
various platforms, release notes, and known issues, please see:
http://www.python.org/2.3
Highlights of this new release include:
- A brand new version of IDLE, the Python IDE, from the IDLEfork
project at SourceForge.
- Many new and improved library modules including: sets, heapq,
datetime, textwrap, optparse, logging, bsddb, bz2, tarfile,
ossaudiodev, itertools, platform, csv, timeit, shelve,
DocXMLRPCServer, imaplib, imp, trace, and a new random number
generator based on the highly acclaimed Mersenne Twister algorithm
(with a period of 2**19937-1). Some obsolete modules have been
deprecated.
- New and improved built-ins including:
o enumerate(): an iterator yielding (index, item) pairs
o sum(): a new function to sum a sequence of numbers
o basestring: an abstract base string type for str and unicode
o bool: a proper type with instances True and False
o compile(), eval(), exec: fully support Unicode, and allow input
not ending in a newline
o range(): support for long arguments (magnitude > sys.maxint)
o dict(): new constructor signatures
o filter(): returns Unicode when the input is Unicode
o int() can now return long
o isinstance(), super(): Now support instances whose type() is not
equal to their __class__. super() no longer ignores data
descriptors, except for __class__.
o raw_input(): can now return Unicode objects
o slice(), buffer(): are now types rather than functions
- Many new doctest extensions, allowing them to be run by unittest.
- Extended slices, e.g. "hello"[::-1] returns "olleh".
- Universal newlines mode for reading files (converts \r, \n and \r\n
all into \n).
- Source code encoding declarations. (PEP 263)
- Import from zip files. (PEP 273 and PEP 302)
- FutureWarning issued for "unsigned" operations on ints. (PEP 237)
- A faster list.sort(), which is now also stable.
- Unicode filenames on Windows. (PEP 277)
- Karatsuba long multiplication (running time O(N**1.58) instead of
O(N**2)).
- pickle, cPickle, and copy support a new pickling protocol for more
efficient pickling of (especially) new-style class instances.
- The socket module now supports optional timeouts on all operations.
- ssl support has been incorporated into the Windows installer.
- Many improvements to Tkinter.
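For a quick taste of a few of these in the interactive interpreter:
>>> list(enumerate(["a", "b", "c"]))
[(0, 'a'), (1, 'b'), (2, 'c')]
>>> sum([1, 2, 3, 4])
10
>>> "hello"[::-1]
'olleh'
>>> isinstance(True, bool)
True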
Python 2.3 contains many other improvements, including the adoption of
many Python Enhancement Proposals (PEPs). For details see:
http://www.python.org/2.3/highlights.html
Enjoy.
happy-50th-birthday-geddy-ly y'rs,
-Barry
Barry Warsaw
barry(a)python.org
Python 2.3 Release Manager
(and the PythonLabs team: Tim, Fred, Jeremy, and Guido)
I'd like to do a 2.3b1 release someday. Maybe at the end of next
week, that would be Friday April 25. If anyone has something that
needs to be done before this release go out, please let me know!
Assigning a SF bug or patch to me and setting the priority to 7 is a
good way to get my attention.
--Guido van Rossum (home page: http://www.python.org/~guido/)
I'm working on techniques to automatically identify problematic
regular expressions where carefully chosen inputs can cause a matcher
to run for a long time.
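A classic toy case of the kind of pattern I mean (illustration only):
import re

# Nested quantifiers plus a final character that forces the overall match
# to fail make a backtracking engine try an exponential number of ways to
# split the 'a's between the two '+' quantifiers.
pattern = re.compile(r'(a+)+$')
pattern.match('a' * 25 + 'b')   # returns None, but only after a very long time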
I need test cases, so I'm looking for Python software that is
widely used and also uses lots of regular expressions. Can anyone
offer any suggestions of what I should look at? I'm also looking for
Perl software.
Thanks
Scott
I'm not on the list, so please CC me any replies.
Barry & Ken have run completely out of round tuits on which are written
"python-mode". I believe Barry indicated privately that he is going to set
up a SF project to allow other folks to contribute (maybe he's just going to
give Python CVS developer privileges to anyone, he may be just that giddy
after the 2.3 release). I have very little time as well and have very old,
feeble Emacs Lisp skills to boot. Hell, I can't even get my Python files to
save properly with tramp these days, and I never did like mapcar. I'm sure
Barry will announce here and probably on c.l.py once the project's
available, but in anticipation of that, if you are an Emacs or XEmacs person
who uses python mode, please consider getting reacquainted with your inner
functional self. Your parens will be so proud.
Skip
Michael wrote:
> I don't object to a syntax for function attributes... in fact, I've seen no
> specific proposal to object to. But I find your point above HIGHLY
> unconvincing:
I agree that
def sqr(x):
    return x*x
sqr.inline = true
is about the same length as
def sqr [inline=true](x):
    return x*x
But when this function is 60 lines long rather than 2 lines long, the
association is broken. This is my major objection: not the number
of characters I write, but rather the number of lines I must scan.
People wouldn't be as likely to use doc strings if the syntax was:
def function(x):
    code
    code code code
function.__doc__ = "this is my doc string"
> Notice that this is the approach that Guido used for classmethod and
> its ilk, and we're still working out what the right "special syntax"
I use the classmethod call because it's the only (good) way to do
the task, but I have the same objection. When I write
class Foo:
    def method(self, ...):
        ...
    method = classmethod(method)
Upon reading, I don't find out the "true meaning" of the method until I
happen upon the classmethod call.
Even in C++ with its bogus syntax, I can identify a class method at the
point of definition
class Foo {
    static void method(...);
    ...
};
so, it would be nice if there was a way to densely associate the
information at the point of function definition just as the doc
string is densely associated with the function definition.
class Foo:
    def method [classmethod=true](self, ...):
        "doc string"

    def get_x [property="x"](self):
        return self.__x
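The closest I can get today is a small helper -- call it attrs; this is
just a sketch, not a proposal -- which keeps the attribute names compact
but still trails the body:
def attrs(**kwds):
    "Return a wrapper that copies the given attributes onto a function."
    def attach(func):
        for name, value in kwds.items():
            setattr(func, name, value)
        return func
    return attach

def sqr(x):
    return x*x
sqr = attrs(inline=True)(sqr)   # compact, but still after the body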
--
Patrick Miller | (925) 423-0309 | http://www.llnl.gov/CASC/people/pmiller
The more original a discovery, the more obvious it seems afterward.
-- Arthur Koestler
I notice that parts of the trunk are reporting themselves as Python 2.4, but
other key parts for Windows are still Python 2.3 - eg, the DLL name, the
PC\*.rc files, etc.
Should I check code in to move everything to a 2.4 world on the trunk?
And while I am asking, I am a little uncertain about the CVS tags. Can
someone explain the distinction between release23-branch and release23-fork
for me? Is there a branch that will track the "latest 2.3" for the next few
years (ie, through 2.3.1, 2.3.2 etc?)
Thanks,
Mark.
Hi,
I'm having a deadlock on import in my embedded python 2.3rc2, on
Win2000, built with Visual C++ 7.1.
Having spent a fair amount of time going through the code and various
threads, I'm still not sure whether I'm hitting the problem described
in this thread:
http://mail.python.org/pipermail/python-dev/2003-February/033436.html
or whether I'm triggering it because of something else altogether, as
I'm only seeing the problem on Win2000; on Linux it works fine.
Basically my code does this in one C level thread:
PyGILState_STATE _tstate = PyGILState_Ensure ();
PyObject* usermod = PyImport_ImportModule ("echo");
Where echo.py is just:
print "ECHO"
import time
time.sleep(1)
print "DONE"
It never prints out "DONE". If I take away the sleep(), it
finishes, printing DONE.
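(For reference -- and this may well not be what I'm hitting -- the classic
pure-Python way to provoke an import deadlock is a sketch like the one
below, when the module is imported rather than run as a script:)
# deadlock_demo.py -- sketch only, not my actual embedding code.
# When another module does "import deadlock_demo", the importing thread
# holds the global import lock while this body runs; the worker's own
# import then blocks on that lock, and join() never returns.
import threading

def worker():
    import time        # blocks on the import lock held by the importer
    time.sleep(1)

t = threading.Thread(target=worker)
t.start()
t.join()               # deadlock: we wait for the worker, it waits for us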
Running with THREADDEBUG=15, I get the output below, and the
last two lines leave me utterly puzzled, as if something were
quite wrong on my machine (Win2000 under VMware, Linux host).
Why would the same thread be unable to reacquire a lock it just held?
PyThread_init_thread called
1084: PyThread_allocate_lock() -> 01647578
1084: PyThread_acquire_lock(01647578, 1) called
1084: PyThread_acquire_lock(01647578, 1) -> 1
1084: PyThread_release_lock(01647578) called
1084: PyThread_acquire_lock(01647578, 1) called
1084: PyThread_acquire_lock(01647578, 1) -> 1
1084: PyThread_release_lock(01647578) called
PyThread_allocate_lock called
1084: PyThread_allocate_lock() -> 0164A280
1084: PyThread_acquire_lock(0164A280, 0) called
1084: PyThread_acquire_lock(0164A280, 0) -> 1
1084: PyThread_release_lock(0164A280) called
1084: PyThread_acquire_lock(0164A280, 0) called
1084: PyThread_acquire_lock(0164A280, 0) -> 1
1084: PyThread_release_lock(0164A280) called
1084: PyThread_acquire_lock(0164A280, 0) called
1084: PyThread_acquire_lock(0164A280, 0) -> 1
1084: PyThread_release_lock(0164A280) called
1084: PyThread_acquire_lock(0164A280, 0) called
1084: PyThread_acquire_lock(0164A280, 0) -> 1
1084: PyThread_release_lock(0164A280) called
1084: PyThread_acquire_lock(0164A280, 0) called
1084: PyThread_acquire_lock(0164A280, 0) -> 1
1084: PyThread_release_lock(0164A280) called
1084: PyThread_acquire_lock(0164A280, 0) called
1084: PyThread_acquire_lock(0164A280, 0) -> 1
1084: PyThread_release_lock(0164A280) called
1084: PyThread_acquire_lock(0164A280, 0) called
1084: PyThread_acquire_lock(0164A280, 0) -> 1
1084: PyThread_release_lock(0164A280) called
1084: PyThread_acquire_lock(0164A280, 0) called
1084: PyThread_acquire_lock(0164A280, 0) -> 1
1084: PyThread_release_lock(0164A280) called
PyThread_allocate_lock called
1084: PyThread_allocate_lock() -> 0256DD40
1084: PyThread_acquire_lock(0256DD40, 1) called
1084: PyThread_acquire_lock(0256DD40, 1) -> 1
1084: PyThread_release_lock(0256DD40) called
1084: PyThread_acquire_lock(0164A280, 0) called
1084: PyThread_acquire_lock(0164A280, 0) -> 1
1084: PyThread_release_lock(0164A280) called
1084: PyThread_acquire_lock(0164A280, 0) called
1084: PyThread_acquire_lock(0164A280, 0) -> 1
1084: PyThread_release_lock(0164A280) called
PyThread_allocate_lock called
1084: PyThread_allocate_lock() -> 01648DA8
1084: PyThread_acquire_lock(01648DA8, 1) called
1084: PyThread_acquire_lock(01648DA8, 1) -> 1
1084: PyThread_acquire_lock(0164A280, 0) called
1084: PyThread_acquire_lock(0164A280, 0) -> 1
1084: PyThread_release_lock(0164A280) called
1084: PyThread_acquire_lock(0164A280, 0) called
1084: PyThread_acquire_lock(0164A280, 0) -> 1
1084: PyThread_release_lock(0164A280) called
1084: PyThread_release_lock(01648DA8) called
1084: PyThread_acquire_lock(0164A280, 0) called
1084: PyThread_acquire_lock(0164A280, 0) -> 1
1084: PyThread_release_lock(0164A280) called
1084: PyThread_acquire_lock(0164A280, 0) called
1084: PyThread_acquire_lock(0164A280, 0) -> 1
ECHO
1084: PyThread_release_lock(01648DA8) called
1084: PyThread_acquire_lock(01648DA8, 1) called
Harri
Pat Miller writes:
> I can imagine:
>
> def sqr [inline=true](x):
>     return x*x
[...]
> I think it essential to use some dense syntax. It would be messy if it
> were
>
> def sqr(x):
>     return x*x
> sqr.inline = true
I don't object to a syntax for function attributes... in fact, I've seen no
specific proposal to object to. But I find your point above HIGHLY
unconvincing:
Python 2.3 (#46, Jul 29 2003, 18:54:32) [MSC v.1200 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> version_1 = """def sqr [inline=true] (x):
...     return x*x
... """
>>> version_2 = """def sqr(x):
...     return x*x
... sqr.inline = true
... """
>>> len(version_2) - len(version_1)
4
Here's what WOULD convince me. Mark Nottingham writes:
> I would very much like to use function
> attributes for associating metadata with functions and methods, but the
> lack of such syntactic support precludes their use, so I end up
> (ab)using __doc__.
Rather than introducing the syntax first, let's start by making it
POSSIBLE somehow, even if the syntax is awkward. That's already been
done -- functions can have attributes. THEN, let's write some cool
code that uses the feature, and we'll learn how it works out in practice.
Only once we see how very useful this really is should we consider
introducing special syntax for it, because the less special syntax
that Python has, the better it is (if we keep functionality constant).
Notice that this is the approach that Guido used for classmethod and
its ilk, and we're still working out what the right "special syntax"
is for those, but in the meantime we're seeing that they are getting
lots of use.
I think this approach (first prove it's useful for Python, THEN add
syntax to the language) is preferable for ALL new "special syntax"
except in three cases. One exception is features like generators,
which are instantly recognized by all as being enormously useful,
so we skipped the "whether to do it" stage and went right to the
discussion of _what_ the special syntax should be. Another exception
is features which simply CAN'T be done without introducing new
syntax... like PEP 310 where we prevent the race condition. And the
third exception is cases where the proponents claim that the syntax
change itself is what makes the difference -- for instance those
who point to Ruby's blocks as very useful while Python's lambdas
have just a little bit too much typing. (Not that I agree, necessarily,
but they're entitled to make their point.)
Other than that, let's always try things out before adding special
syntax.
Now-my-fourth--no-AMONGST-my-exceptions-lly yours,
-- Michael Chermside