I've been heading down a twisted path that's
led me to run the curses regression test
on a build that's been forced to use the
curses has_key emulation code.
It fails on the "has_key(13)" line in the test,
and the has_key.py code looks questionable to me:
capability_name = _capability_names[ch]
if _curses.tigetstr(capability_name): return 1
else: return 0
The keys in the _capability_names dictionary are
taken from the curses keycodes and are in the
range octal 0400 to 0777, so the dictionary lookup
on a key of 13 raises a KeyError exception - which
is not caught, as can be seen above.
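A minimal fix would be to look the code up with dict.get() (or catch the KeyError) so that codes outside the keycap range simply report "no such key". A sketch of that fix; the `_capability_names` contents and the tigetstr stub below are stand-ins for the real curses machinery:

```python
# Sketch of a fix for the emulated has_key(): treat key codes with no
# capability name as "not a known key" instead of raising KeyError.
_capability_names = {0o400: "kb2"}  # the real table maps 0o400..0o777

def _tigetstr(name):
    # stand-in for _curses.tigetstr(); pretend every named capability exists
    return b"\x1b[E"

def has_key(ch):
    capability_name = _capability_names.get(ch)
    if capability_name is None:
        return 0  # e.g. ch == 13 (carriage return): not a keycap code
    if _tigetstr(capability_name):
        return 1
    return 0
```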
I am using Pmw to create a GUI. The input is given through an XML file: a class parses the XML file and builds the GUI with Pmw. The GUI consists of entry widgets and combo boxes, all created at loading time. Now, at runtime, when I change the values of these entry fields, how do I read them? There are multiple XML files, and hence multiple windows are created.
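One common pattern, independent of the Pmw specifics, is to record each widget in a name-to-widget mapping as the XML is parsed, then read current values through that mapping at runtime (with Pmw you would call getvalue() on an EntryField). A GUI-free sketch, where FakeEntry stands in for a Pmw.EntryField and all names are mine:

```python
# Sketch: keep a registry of widgets keyed by (window, field) while
# building the GUI, then read their current values through it later.

class FakeEntry:
    """Stand-in for a Pmw.EntryField (which has get/setvalue methods)."""
    def __init__(self, value=""):
        self._value = value
    def setvalue(self, v):
        self._value = v
    def getvalue(self):
        return self._value

class WidgetRegistry:
    def __init__(self):
        self._widgets = {}  # (window_name, field_name) -> widget
    def register(self, window, field, widget):
        self._widgets[(window, field)] = widget
    def value(self, window, field):
        return self._widgets[(window, field)].getvalue()

# At load time, while parsing each XML file:
registry = WidgetRegistry()
entry = FakeEntry("initial")
registry.register("window1", "username", entry)

# At runtime, after the user edits the field:
entry.setvalue("edited at runtime")
```

Because each window's widgets are registered under the window's own name, multiple XML files (and hence multiple windows) can share one registry.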
I've been thinking about the possibility of an optimization of range()
calls in for-loops, replacing the list generation with a lazy range
iterator. xrange(), in Python 2.3, now has an iterator, so it should be
faster than range() in for-loops, which is an incentive to use it more.
However, this does go against the overall desire to move away from
using xrange(), just as we have moved away from xreadlines(), etc.
So, I've been considering possible ways in which to 'optimize' the
following types of loops:
for i in range(1000):
    ...

n = 1000
for i in range(n):
    ...
The main issue, as I see it, is substituting some form of lazy range
iterator for the range() function, "behind the curtains" as it were.
People could gain the benefits of xrange() (less memory consumption,
probably faster looping) without having to promote the continued use
of xrange().
There would have to be logic to determine that the range function
being used is the builtin range, and not a shadowed version. I assume
this can be done in a fairly straightforward manner at compile time.
I could use advice on whether this is feasible (but see below, first).
A further issue is the substitution itself. As of 2.3b1 the range()
builtin is able to handle long integer objects larger than
sys.maxint. xrange() is not so equipped, so it cannot be substituted
blindly for range() without examining the range() arguments.
However, as the second example above shows, the value of the arguments
would have to be determined at runtime. In principle, a function
could be made which checked the arguments, and if they could be
handled by an xrange() object, it could be returned. Otherwise, the
results of range() could be returned. However, it is unclear whether
the results of this indirection (and double checking of the arguments)
would be a speed win (although the reduced memory requirements would
remain a benefit).
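The argument-checking dispatch described above might look something like this (a sketch only; the shim at the top makes it runnable on Python 3, where xrange is gone and sys.maxint is sys.maxsize, and the function name is mine):

```python
import sys

# Compatibility shim so the sketch runs on Python 3 as well.
try:
    xrange
except NameError:
    xrange = range
maxint = getattr(sys, "maxint", sys.maxsize)

def lazy_range(*args):
    """Return an xrange() when all arguments fit in a plain int,
    falling back to range() for long-integer arguments."""
    if all(-maxint - 1 <= a <= maxint for a in args):
        return xrange(*args)
    return range(*args)
```

The extra checks run once per loop, not per iteration, so the per-call cost is bounded; whether it nets out as a speed win is exactly the open question above.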
Guido has already stated his opinion that xrange should NOT be
extended to handle values larger than sys.maxint (ie. he doesn't want
to give xrange() any more features). A related option is to make a
kind of private lazy range-iterator object, which could be
unconditionally substituted for the range function, in for loops, at
compile time. Based on my understanding of how the code generation
works, this could be made to work in principle. However, without
introducing some new byte codes, it doesn't seem possible to have a
hidden, "private" iterator generator function.
So, without going into much further detail it seems there would be
only three feasible options.
1) Extend xrange() to handle long int arguments, to bring parity with
range(), allowing unconditional substitution for range() in for loops.
2) Change byte code generation to allow substituting a private range
iterator (or range() iterator generator) that can handle all possible
arguments.
3) Forget it. Not worth bothering about.
After thinking about it a lot, it seems number 3 has become the most
attractive (and likely) option. :) Number 2 seems like a maintenance
nightmare, and number 1 would require the BDFL and community to change
the stance on extending the feature set of xrange().
I admit I haven't done extensive testing to determine whether this
optimization would really provide much benefit on the whole.
Certainly a grep through any large Python code base (say the standard
libraries) indicates that there are many uses of range() in for loops,
but typically for fairly small ranges, so the memory issue may not be
such a problem. Also, should there ever be a Python 3, where range()
itself probably becomes lazy, the porting headaches, for this
particular issue, should be minimal compared to others. (ie. xrange()
would become a synonym, of sorts, for the new range())
Anyway, I only bring it up because it is an issue that does get
discussed from time to time, usually with the idea that it "could"
possibly be done in the future. I think if it should ever be done, it
should probably be done soon, just so that using xrange() will grow
less attractive, rather than more. I'm interested to hear other
thoughts about this.
Chad Netzer <cnetzer (at) sonic (dot) net>
I think there's a bug in the logging package.
It defines a setLevel() method on Handler objects, but I can't find any
explanation of what it does. I checked the code and calling setLevel()
sets an attribute that is never read.
I expected setLevel() to be a way to filter out messages below a certain
level. Is that the intended effect? If so, I'd be happy to fix it so
that it worked.
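For reference, the filtering behaviour I expected from Handler.setLevel() can be demonstrated like this (a sketch of the intended semantics, not of the current code):

```python
import io
import logging

# A handler writing to a string buffer, so the filtering effect is visible.
buf = io.StringIO()
handler = logging.StreamHandler(buf)
handler.setLevel(logging.WARNING)  # should drop records below WARNING

logger = logging.getLogger("setlevel-demo")
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)
logger.propagate = False

logger.info("filtered out")       # below the handler's level
logger.warning("passes through")  # at or above the handler's level
```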
OK, I've sent the VC compiler wishlist to Nick Hodapp at Microsoft.
I received 18 requests, two of which came after I had sent in the
list (sorry, guys). These folks will get their compiler:
Guido van Rossum
Paul F. Dubois
Martin v. Löwis
Gordon B McMillan
Nick said the order would be placed on July 1st to fall in the 2004
budget. Delivery will take approximately 2 weeks from then.
--Guido van Rossum (home page: http://www.python.org/~guido/)
After doing a whole heck of a lot of Java and Jython programming over the
last year, I decided to work an idea of mine into a PEP after being
impressed with Java thread synchronization and frustrated with Python (it's
almost always the other way around...).
Comments, please send to me. I think python-dev is the right forum for
discussion; otherwise someone will surely let me know and I'll take it
elsewhere.
Anyone want to take this on?
------- Forwarded Message
Date: Tue, 17 Jun 2003 15:18:07 -0400
From: Kevin Jacobs <jacobs(a)penguin.theopalgroup.com>
To: Guido van Rossum <guido(a)python.org>
Subject: Re: [Python-Dev] Py2.3 Todo List
I'll vote for applying patch #751916, which fixes some memory leaks in the
timeout code, and allows the SSL code to recover from keyboard interrupts.
I have several applications that work with 2.2.x that do not currently
work with the 2.3 CVS due to this issue.
The OPAL Group - Enterprise Systems Architect
Voice: (216) 986-0710 x 19 E-mail: jacobs(a)theopalgroup.com
Fax: (216) 986-0714 WWW: http://www.theopalgroup.com
------- End of Forwarded Message
Guido van Rossum wrote:
>> It's still not the same as for str:
>> >>> "123".index("3", 0L, sys.maxint+1)
>> 2
>> >>> list("123").index("3", 0L, sys.maxint+1)
>> Traceback (most recent call last):
>>   File "<stdin>", line 1, in ?
>> OverflowError: long int too large to convert to int
> Do you really care about such end cases?
At least str does.
Changing

    "ii", &i, &j

to

    "O&O&", _PyEval_SliceIndex, &i, _PyEval_SliceIndex, &j

in the PyArg_ParseTuple() call should fix it.
> It would be simple enough to introduce new-style exceptions if
> Exception were made a new-style class and at the same time all
> new-style exceptions were required to derive from Exception:
> raise x
> would check whether x was:
> - a string (but not an instance of a true subclass of str)
> - a classic class
> - an instance of a classic class
> - Exception or a subclass thereof
> - an instance of Exception or of a subclass thereof
> Where the first three cases are for backward compatibility.
> Similarly, the rule for
> raise x, y
> should allow x to be
> - a string
> - a classic class
> - Exception or a subclass thereof
> and in the last two cases, y could either be an instance of x (or of a
> subclass of x!), or an argument for x, or a tuple of arguments for x.
Okay, after hearing this (plus all the arguments about PEP 317
requiring an excessive level of migration pain), I am now convinced.
If the PEP winds up being officially rejected, I propose that it
grow a "rejection reasons" section explaining why, and that this section
also describe the above plan as the "plausible alternative" to PEP 317
for eventual migration to new-style exceptions.
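The quoted acceptance rule for bare "raise x" could be sketched as a predicate (modern Python shown, where classic classes no longer exist, so only the string-compatibility and Exception cases remain; the function name is mine):

```python
def is_valid_raise_target(x):
    """Sketch of the proposed rule for `raise x`: allow strings
    (backward compatibility, but not instances of true str subclasses),
    Exception subclasses, and Exception instances.  The classic-class
    cases are omitted since classic classes don't exist in modern Python."""
    if type(x) is str:  # a string, but not a str-subclass instance
        return True
    if isinstance(x, type) and issubclass(x, Exception):
        return True
    if isinstance(x, Exception):
        return True
    return False
```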
-- Michael Chermside
Hi Jim -- are you still the prime suspect for zipfile.py? If so, could
you take a look at http://python.org/sf/755031 and let me know if I'm
onto something, or if zipfile.py is really in the right here?
Greg Ward <gward(a)python.net> http://www.gerg.ca/
Sure, I'm paranoid... but am I paranoid ENOUGH?