I think this decorator should be included in itertools:
from functools import wraps

class ReIter(object):
    __slots__ = 'f', 'args', 'kwargs'

    def __init__(self, f, args, kwargs):
        self.f = f
        self.args = args
        self.kwargs = kwargs

    def __iter__(self):
        # re-run the generator function on every iteration
        return self.f(*self.args, **self.kwargs)

def reiter(f):
    @wraps(f)
    def wrapper(*args, **kwargs):
        return ReIter(f, args, kwargs)
    return wrapper
Using this, you can iterate over the return value of a generator function
decorated with reiter as often as you like (and not just once):
import sys

@reiter
def gen(x, y):
    for i in xrange(x):
        yield i * y    # body illustrative; the original example was elided

g = gen(5, 3)
for x in g:
    sys.stdout.write('%r\n' % x)
for x in g:
    sys.stdout.write('%r\n' % x)
The difference from itertools.tee is that old values are not remembered;
instead, the generator function is re-evaluated each time __iter__() is
called. This might come in handy when you implement methods like items(),
values() and keys() in a custom dict implementation.
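For instance, here is a minimal sketch of that use case (MyDict and its
_data attribute are made up for illustration):

class MyDict(object):
    def __init__(self, data):
        self._data = data    # a plain dict held internally

    @reiter
    def items(self):
        for key in self._data:
            yield key, self._data[key]

pairs = MyDict({'a': 1}).items()
assert list(pairs) == list(pairs)    # iterable more than once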
Or is there already such a thing and I missed it?
PS: Maybe it should be called regen/ReGen instead of reiter/ReIter?
PPS: Funny thing, both "Reiter" and "Regen" are German words. Reiter = rider or tab; Regen = rain.
Huh? That makes no sense. It's not as if a += x means

    a = a + x

or anything like that. Language decisions shouldn't be made based on wrong
understandings of how the language works.
As to the idea of turning a guaranteed run-time error into a compile-time
error, I'm usually in favor of that. If it doesn't muck up the compiler.
On Jun 13, 2010 5:54 AM, "Demur Rumed" <junkmute(a)hotmail.com> wrote:
> > But if the programmer intended a to be global, the *only*
> > reason it's a bug is the current s...
Some like to think of plain = as a degenerate form of augmented assignment.
Currently, = doesn't align with the augmented forms on this point, which
doesn't seem very consistent. Add to this that augmented assignment is the
only globalizing store statement which also dereferences, and consistency
doesn't seem to be a strong argument against this proposal.
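A short demonstration of the run-time behavior in question (the function
names are made up):

x = 1

def plain():
    x = 2     # fine: '=' simply binds a new local x

def augmented():
    x += 2    # loads x before it is bound in this scope

plain()       # runs without error
augmented()   # raises UnboundLocalError at run time, not compile time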
OK... after a bit of off-list discussion I realize what I am really
concerned about with respect to the standard library wasn't well expressed.
So here's my real assertion:
There is no reason any new library or functionality should be tied to a
specific Python release.
Outside of a few exceptions (like ast or importlib), functionality in the
standard library seldom relies on anything in a particular Python release;
e.g., code might use conditional expressions, but it never *has* to use
them. The standard library that most people know and love is really the
least common denominator of the Pythons that person has to handle; for
someone writing an open source library that's probably 2.5, for someone
using Zope 2 it's traditionally been 2.4, and if you have a controlled
environment (e.g., internal development) maybe you can do 2.6.
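To illustrate the conditional-expression point above, the same logic can
always be spelled in a version-portable way:

# Python 2.5 and later only:
value = a if condition else b

# Equivalent on any version the stdlib targets:
if condition:
    value = a
else:
    value = b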
I think there is a general consensus that functionality should not be tied
to a Python release, but the results are ad hoc. That is, truly useful
libraries that are added to the stdlib are backported, or more often were
originally maintained as a library with backward compatibility before being
integrated into the standard library. I think we should have a more
formalized process for how this functionality is maintained, including a
process that considers the years of ongoing maintenance and improvement
that should happen on these libraries. (Most specifically, without serious
thought about this development process I am pessimistic about an orderly or
positive inclusion of distutils2 in packaging workflows.)
Another alternative is simply not to make improvements to the standard
library beyond a very well-defined set of appropriate functionality. This
would be much closer to the status quo. Defining which categories are
"appropriate" would be contentious, I am sure, but it would sharply focus
the standard library's scope.
Ian Bicking | http://blog.ianbicking.org
Threading will probably break here, as I wasn't on the list for the first
message.
My concern with the standard library is that there are a couple of things
going on:
1. The standard library represents "accepted" functionality, kind of best
practice, kind of just conventional. Everyone (roughly) knows what you are
talking about when you use things from the standard library.
2. The standard library has some firm backward compatibility guarantees. It
also has some firm stability guarantees, especially within releases (though
in practice, nearly for eternity).
3. The standard library is kind of collectively owned; it's not up to the
whims of one person, and can't be abandoned.
4. The standard library is one big chunk of functionality, upgraded all
under one version number, and specifically works together (though in
practice cross-module refactorings are uncommon).
There are positive things about these features, but point 4 really drives
me nuts, and I think it is a strong disincentive to putting stuff into the
standard library. For packaging, I think point 4 actively damages
maintainability.
Packaging is at the intersection of several systems:
* Python versions
* Forward and backward compatibility with distributed libraries
* System policies (e.g., Debian has changed things around a lot in the
last few years)
* A whole other ecosystem of libraries outside of Python (e.g., bindings
to C libraries)
* Various developer toolkits, some Python-specific (e.g., Cython), some not
I don't think it's practical to think that we can determine some scope of
packaging where it will be stable in the long term; all these things are
changing, and many are changing without any particular concern for how it
affects Python (i.e., packaging must be reactive). And frankly, we clearly
do not have packaging figured out; we're still circling in on something...
and I think the circling will be more like a Strange Attractor than a sink.
The same issues exist for other libraries that aren't packaging-related,
of course; it's just worse for packaging. argparse, for instance, is not
"done"... it has bugs that won't be fixed before release, and is missing
functionality that it should reasonably include. But there's no path for
it to get
better. Will it have new and better features in Python 3.3? Who seriously
wants to write code that is only compatible with Python 3.3+ just because of
some feature in argparse? Instead everyone will work around argparse as it
currently exists. In the process they'll probably use undocumented APIs,
further calcifying the library and making future improvements disruptive.
It's not very specific to argparse; I think ElementTree has similar
issues. The json library is fairly unique in that it has a scope that can
be "done". I don't know what to say about wsgiref... it's completely
irrelevant in Python 3 because it was upgraded on the Python release
schedule despite being unready for release (this is relatively harmless,
as I don't think anyone is using wsgiref in Python 3).
So, this is the tension I see. I think aspects of the standard library
process and its guarantees are useful, but the current process means
either releasing code that isn't ready or not releasing code that should
be released; neither is good practice, and both compromise those
guarantees. Lots of moving versions can indeed be difficult to manage...
though it can be made a lot easier with good practices. Even then,
distutils2 (and pip) do not quite fit into that model... they both enter
the workflow before you start working with libraries and versions, making
them somewhat unique (though also giving them some more flexibility, as
they are not so strongly tied to the Python runtime, which is where
stability requirements are most needed).
Ian Bicking | http://blog.ianbicking.org
I'm wondering if there is any downside to making properties callable:

class cproperty(property):    # illustrative subclass; the proposal is for
    def __call__(self, obj):  # property itself to grow __call__
        return self.fget(obj)

>>> foo = Foo()               # assume Foo defines baz = cproperty(...) and
>>> foo.baz is Foo.baz(foo)   # bar = property(...) over the same getters
True
>>> foo.bar is Foo.bar(foo)
Traceback (most recent call last):
  ...
TypeError: 'property' object is not callable
As for the motivation, having callable properties would make it easier
to stack them with other decorators that typically expect callables.
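For example (a sketch reusing the hypothetical cproperty above), a
callable property would double as an ordinary one-argument function:

people = [Foo(), Foo()]
by_baz = sorted(people, key=Foo.baz)  # the property object acts as a key function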
Am I missing something?
It would be nice if setattr() were extended to allow usage as a decorator:

Here's a pure Python implementation:

_setattr = setattr

def setattr(obj, *args):
    # setattr(obj, name, value) keeps its normal meaning:
    if len(args) >= 2:
        return _setattr(obj, *args)
    # setattr(obj[, name]) returns a decorator; the name defaults to the
    # decorated function's __name__:
    return lambda f: _setattr(obj, args[0] if args else f.__name__, f) or f
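Hypothetical usage (Namespace and the function names are made up for
illustration):

class Namespace(object):
    pass

ns = Namespace()

@setattr(ns)            # attribute name defaults to the function's __name__
def greet():
    return 'hello'

@setattr(ns, 'shout')   # or pass the attribute name explicitly
def _loud():
    return 'HELLO'

assert ns.greet() == 'hello' and ns.shout() == 'HELLO'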
I've been working a lot with date/time variables lately, and maybe it's
just me, maybe I just don't get it yet, but it sure doesn't feel like
there's an elegant, obvious way to work with them. Feels like working
with strings in C.
So I was thinking, have date and time literals ever been seriously
proposed in these discussion lists? (If so, I apologize in advance for
reviving the topic.)
Is there any chance we could ever see date/time literals in Python?
Perhaps as a top-level abstraction itself, or as a subclass of
numbers.Number, or even as a common float or int (by adding attributes
regarding specific properties of a number's date/time interpretation,
e.g. "<2010-12-31>.incr_month(2) == <2011-02-28>" -- just an example
though, I don't really think this exact method would be practical).
If we adopt a standard like ISO 8601, then we automatically get an
unambiguous one-to-one relationship between (date+time) and (amount of
time elapsed from <epoch>), over a continuous, infinite domain (towards
+inf and -inf). Very often the "time elapsed" value is what we really
want (or at least the absolute difference between two values, which is
of course easier to find in this form), and besides being more compact
and flexible, no functionality is lost.
I guess the most controversial point here would be the meaning of arithmetic
between dates (or the very validity of such operations).
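For what it's worth, the "time elapsed" arithmetic already works with the
stdlib's datetime type, which is roughly what a literal syntax could
desugar to (D() is a made-up helper standing in for a literal):

from datetime import datetime

def D(text):
    # stand-in for a hypothetical date literal
    return datetime.strptime(text, '%Y-%m-%d')

elapsed = D('2011-02-28') - D('2010-12-31')  # a timedelta
print(elapsed.days)                          # 59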
Another (not so obvious) problem is leap seconds (btw, who the hell
invented those?). It's a weird and complicated problem, but I think it's
not impossible to turn this "complicated" into just "complex", keeping
the tricky concepts buried inside the implementation.
Anyway, here's a couple of suggestions for a syntax:
2010.05.29 + 20.06.17
2010.05.29d + 20.06.17t
(just like complex literals)
So, what do you guys think? I'd love to hear others' opinions, even if
it's just for me to understand what I got wrong and see the obvious way
that's already there.
PS: Just for fun, is there a standard way I could experiment with creating
new types of literals?
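(One low-tech way to play with the idea without new syntax -- a sketch,
with the <YYYY-MM-DD> token and the helper name made up -- is to rewrite
the pseudo-literal into ordinary code before executing it:

import datetime
import re

def expand_date_literals(source):
    # turn <2010-12-31> into datetime.date(int('2010'), int('12'), int('31'));
    # going through int() avoids invalid leading-zero literals such as 05
    return re.sub(r'<(\d{4})-(\d{2})-(\d{2})>',
                  r"datetime.date(int('\1'), int('\2'), int('\3'))", source)

code = expand_date_literals('d = <2010-12-31>')
exec(code)
print(d)  # 2010-12-31
)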
-- Marcos --
In the spirit of collections.OrderedDict and collections.defaultdict, I'd like
to propose collections.identitydict. It would function just like a normal
dictionary, but ignore hash values and comparison operators and merely lookup
keys based on the key's id().
This dict is very useful for keeping track of objects that should not be
compared by normal comparison methods. For example, I would use an
identitydict to hold weak references that would otherwise fall back to
their referent object's hash.
An advantage of formalizing this in collections would be to enable other
Python implementations like PyPy, where id() is expensive, to provide an
optimized implementation.
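A minimal sketch of the semantics (an illustrative pure-Python version,
not a proposed implementation): keys are looked up by id(), and a
reference to each key is kept so its id stays valid:

class identitydict(object):
    def __init__(self):
        self._data = {}  # id(key) -> (key, value); key kept alive on purpose

    def __setitem__(self, key, value):
        self._data[id(key)] = (key, value)

    def __getitem__(self, key):
        return self._data[id(key)][1]

    def __contains__(self, key):
        return id(key) in self._data

    def __len__(self):
        return len(self._data)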
That's not a new idea, but I'd like to throw it out here again.
Some modules/packages in the stdlib are pretty isolated, which means that
they could be upgraded independently from the rest with no harm. For
example, the unittest package or the email package.
Here's an idea:
1 - add a version number to each package or module of the stdlib that is
potentially upgradable
2 - create standalone releases of these modules/packages at PyPI, in a
restricted area 'stdlib upgrades' that can be used only by core devs to
upload new versions. Each release lists the precise Python versions it's
compatible with.
3 - once distutils2 is back in the stdlib, provide a command line
interface to list upgradable packages, and make it possible to upgrade
them
4 - an upgraded package lands in a new, specific site-packages directory
and is loaded *before* the one in Lib (see the sketch below)
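A sketch of the path ordering in point 4 (the 'stdlib-upgrades' directory
name is made up; only the ordering matters):

import os
import sys

# putting the upgrades directory ahead of Lib on sys.path makes upgraded
# packages shadow the bundled ones
upgrades = os.path.join(sys.prefix, 'stdlib-upgrades')
sys.path.insert(0, upgrades)

import unittest  # now resolved from stdlib-upgrades, if present there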
Tarek Ziadé | http://ziade.org