> I'm new; greetings all!
> I'm not sure if this is a bug or feature, but it confused me so I thought
> I'd raise the issue.
> class a:
>     def b(self):
>         pass
>     foo = ('Hello', b)
> class c(a):
>     def d(self):
>         t = type(self.__class__.foo[1])
>         print t
>         t = type(self.__class__.b)
>         print t
> e = c()
> e.d()
> prints <type 'function'> for the first print,
> and it seems to me it should be an instancemethod
> I think what's happening is that it's defining 'b' as a function in the
> class's namespace, storing a reference to that function in the tuple,
> and then, when the class definition ends, it's wrapping the function as
> a method.
> You'll find:
> >>> a.foo[1]
> <function b at 0x00B42C30>
> >>> a.b
> <unbound method a.b>
> >>> a.b.im_func is a.foo[1]
> True
I'm afraid this won't do what I want. As illustrated below, I want to
reference the method from within the same class, so as to be able to
access it from a superclass.
(sorry if subclass and superclass are not the correct terminology for Python)
> I'm trying to do something like this:
> class EditPage:
>     additional_buttons = ()
>     def __init__(self):
>         buts = []
>         for x in self.additional_buttons:
>             if isinstance(x, types.UnboundMethodType):  # fails because
>                 # type(x) is function, not UnboundMethod
>                 buts.append((x, types.MethodType(x, self)))
>             else:
>                 buts.append(x)
> class TreePage(EditPage):
>     def EditAsText(self):
>         pass
>     additional_buttons = (('EditAsText', EditAsText),)
It might be that Python simply doesn't do exactly what I want.
That would be really unusual; mostly I think it's great and highly intuitive.
No biggie; I'm sure I can find a workaround!
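One such workaround, sketched here in Python 3 (where unbound methods are gone entirely, so class-level references are plain functions): test for `types.FunctionType` and bind explicitly with `types.MethodType`. The `buttons` attribute and the method body are illustrative, not from the original post.

```python
import types

# Sketch: class-level button tables store plain functions, so detect
# FunctionType (UnboundMethodType no longer exists in Python 3) and
# bind the function to the instance explicitly.
class EditPage:
    additional_buttons = ()

    def __init__(self):
        self.buttons = []
        for label, handler in self.additional_buttons:
            if isinstance(handler, types.FunctionType):
                # Bind the plain function to this instance.
                handler = types.MethodType(handler, self)
            self.buttons.append((label, handler))

class TreePage(EditPage):
    def EditAsText(self):
        return "editing %s as text" % type(self).__name__
    additional_buttons = (('EditAsText', EditAsText),)

page = TreePage()
label, handler = page.buttons[0]
print(label, handler())  # EditAsText editing TreePage as text
```

The superclass `__init__` picks up `additional_buttons` from the subclass via normal attribute lookup on `self`, which is the behaviour the poster was after.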
Please see http://www.python.org/dev/peps/pep-0434/ for the complete
PEP. The idea for the PEP came about because a right-click menu was
added to IDLE in Python 2.7; this started a debate about bug fix vs.
enhancement (see the PEP references for more information). IDLE has
many patches that already exist but are difficult to get committed.
This PEP is designed to make it easier to bring IDLE up to speed with
modern GUI standards. The PEP is not perfect; some issues that need
to be resolved:
-How do we test IDLE patches to make sure they are high quality?
Apparently the buildbots don't test user-interface issues. My
philosophy is to keep it simple, so I was thinking that a simple
procedure for testing on each of the major platforms should be
performed before a commit. Creating a buildbot that tests user
interfaces seems difficult to me and could put IDLE improvements even
further behind.
-Does it make sense to separate IDLE from the stdlib for Python 3.4?
I understand the batteries-included argument, but if IDLE development
is going to proceed at a more accelerated pace than Python, it might
make sense to separate it out into its own project.
Please provide comments or other concerns. Thanks.
With the removal of unbound methods in Python 3, introspecting the
class on which a method was defined is not so simple. Python 3.3
gives us __qualname__ for classes and functions (see PEP 3155), which
helps somewhat by giving us a string composed at compile-time.
However, I was considering something a little more powerful.
I propose that what would have formerly been an unbound method get a
new attribute, __origin__. It would be bound to the class where the
method was defined.
One motivator for this proposal is http://bugs.python.org/issue15582,
allowing inspect.getdoc() to "inherit" docstrings. Without a concrete
connection between a method and the "origin" class, dynamically
determining the docstring is basically a non-starter. The __origin__
attribute would help.
The name, __origin__, is a bit generic to accommodate the possible
extension of the proposal to other objects:
* f.__origin__ (the module or function (nested) where the function
was defined)
* cls.__origin__ (the module, class (nested), or function where the
class was defined)
* code.__origin__ (the function for which the code object was
created)
* module.__origin__ (a.k.a. module.__file__)
For functions and classes defined in a function, __origin__ would be
bound to the function rather than the locals or the code object. For
modules, __origin__ is more accurate in the cases that the module was
generated from something other than a file. It would still be a
string, though. Also, we currently have a convention of setting
__module__ to the name of the module rather than the module itself.
Whether to break with that convention for __origin__ is an open
question which relates to how __origin__ would be used.
Conceivably, each use case for __origin__ could be covered by a name
more specific to the object, e.g. module.__file__. However, having
the one name would help make the purpose clear and consistent.
The downside of binding the objects to __origin__ is in memory usage
and, particularly, in ref-counts/reference-cycles. I expect that this
is where the biggest objections will lie. However, my understanding
is that this is less of an issue now than it used to be. If need be,
weakref proxies could address some of the concerns.
The status quo has a gap in the case of "unbound" methods and of code
objects, though __qualname__ does offer an improvement for methods.
The actual use case I have relates strictly to those methods. Having
something like __origin__ for those methods, at least, would be
worthwhile.
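As a rough illustration of the proposed semantics: the `__origin__` attribute does not exist, but its effect can be approximated today with a descriptor using `__set_name__` (added in Python 3.6, after this discussion). `OriginFunction` is a purely hypothetical name.

```python
# Sketch only: approximate the proposed __origin__ attribute by
# recording the defining class at class-creation time.
class OriginFunction:
    def __init__(self, func):
        self.func = func

    def __set_name__(self, owner, name):
        # Record the defining class, as the proposed __origin__ would.
        self.func.__origin__ = owner

    def __get__(self, obj, objtype=None):
        # Delegate to normal function binding behaviour.
        return self.func.__get__(obj, objtype)

class Base:
    @OriginFunction
    def method(self):
        """Base.method docstring."""
        return "hello"

class Derived(Base):
    pass

# Even when reached through a subclass, the method remembers where it
# was defined -- the concrete link inspect.getdoc() would need in
# order to "inherit" docstrings.
print(Derived.method.__origin__ is Base)  # True
print(Base().method())                    # hello
```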
An exchange with Antoine in one of the enum threads sparked a thought.
A recurring suggestion for collections.namedtuple is that it would be
nice to be able to define them like this (as it not only avoids having
to repeat the class name, but also allows them to play nicely with
pickle and other name-based reference mechanisms):
class MyTuple(NamedTuple):
    __fields__ = "a b c d e".split()
However, one of Raymond's long standing objections to such a design
for namedtuple is the ugliness of people having to remember to include
the right __slots__ definition to ensure it doesn't add any storage
overhead above and beyond that for the underlying tuple.
For the intended use case as a replacement for short tuples, an unused
dict per instance is a *big* wasted overhead, so that concern can't be
dismissed as premature optimisation:
>>> import sys
>>> class Slots(tuple): __slots__ = ()
>>> class InstanceDict(tuple): pass
>>> sys.getsizeof(tuple([1, 2, 3]))
>>> x = Slots([1, 2, 3])
>>> y = InstanceDict([1, 2, 3])
>>> sys.getsizeof(y) # All good, right?
>>> sys.getsizeof(y.__dict__) # Yeah, not so much...
However, the thought that occurred to me is that the right metaclass
definition allows the default behaviour of __slots__ to be flipped, so
that you get "__slots__ = ()" defined in your class namespace
automatically, and you have to write "del __slots__" to get normal
class behaviour back:
>>> class SlotsMeta(type):
...     def __prepare__(cls, *args, **kwds):
...         return dict(__slots__=())
>>> class SlotsByDefault(metaclass = SlotsMeta): pass
>>> class Slots(tuple, SlotsByDefault): pass
>>> class InstanceDict(tuple, SlotsByDefault): del __slots__
>>> sys.getsizeof(Slots([1, 2, 3]).__dict__)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'Slots' object has no attribute '__dict__'
>>> sys.getsizeof(InstanceDict([1, 2, 3]))
>>> sys.getsizeof(InstanceDict([1, 2, 3]).__dict__)
So, what do people think? Too much magic? Or just the right amount to
allow a cleaner syntax for named tuple definitions, without
inadvertently encouraging people to do bad things to their memory
usage? (Note: for backwards-compatibility reasons, we couldn't use a
custom metaclass for the classes returned by the existing collections
namedtuple API. However, we could certainly provide a distinct
collections.NamedTuple type which used a custom metaclass to behave
this way.)
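The interpreter session above can be checked with a small self-contained script (same names as in the session; `__prepare__` is written as a classmethod, the usual form):

```python
import sys

class SlotsMeta(type):
    # Pre-seed every class namespace with an empty __slots__, so
    # subclasses are slot-only unless they explicitly `del __slots__`.
    @classmethod
    def __prepare__(mcs, name, bases, **kwds):
        return dict(__slots__=())

class SlotsByDefault(metaclass=SlotsMeta):
    pass

class Slots(tuple, SlotsByDefault):
    pass

class InstanceDict(tuple, SlotsByDefault):
    del __slots__   # opt back in to a per-instance __dict__

x = Slots([1, 2, 3])
y = InstanceDict([1, 2, 3])
print(hasattr(x, '__dict__'))         # False: no dict overhead
print(hasattr(y, '__dict__'))         # True
print(sys.getsizeof(y.__dict__) > 0)  # True: the hidden per-instance cost
```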
Nick Coghlan | ncoghlan(a)gmail.com | Brisbane, Australia
After all the defenses I still don't like Tim's proposed syntax. Color me Barry.
On Tue, Feb 12, 2013 at 11:54 AM, Tim Delaney <tim.delaney(a)aptare.com> wrote:
> On 13 February 2013 06:13, Guido van Rossum <guido(a)python.org> wrote:
>> So with Tim's implementation, what happens here:
>> class Color(Enum):
>>     RED, GREEN, BLUE
>>     if sys.platform == 'win32':
>> and you happen to have no "import sys" in your module? The sys lookup
>> will succeed, create a new enum, and then you will get an error
>> something like this:
>> AttributeError: 'Enum' object has no attribute 'platform'
> Well, that particular case would work (you'd get a NameError: sys) due to
> having done an attribute lookup on sys, but the following:
> class Color(Enum):
>     RED, GREEN, BLUE
>     if platfor == 'win32':
> would create an enum value 'platfor'.
> Personally, I don't think it's possible to have an enum with a simple
> interface and powerful semantics with just python code without it being
> fragile. I think I've got it fairly close, but there is horrible magic in
> there (multiple kinds) and there are definitely still edge cases. Any
> complete enum implementation is going to need some special handling by the
> parser I think.
> I'm actually thinking that to simplify things, I need a sentinel object to
> mark the end of the enum list (which allows other names after it). But that
> still wouldn't handle the case above (the if statement).
> BTW, for anyone who hasn't seen the magic code (largely uncommented, no
> explanations yet of how it's doing it - I probably won't get to that until
> the weekend) it's here: https://bitbucket.org/magao/enum
> Tim Delaney
--Guido van Rossum (python.org/~guido)
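The fragility being discussed can be reproduced with a toy metaclass whose `__prepare__` namespace auto-creates a member for any unknown non-dunder name. `AutoDict` and `AutoEnumMeta` are hypothetical names for illustration, not Tim's actual implementation:

```python
class AutoDict(dict):
    # Class-body name lookups on a non-exact dict go through
    # __getitem__/__missing__, so unknown names can be auto-created.
    def __init__(self):
        super().__init__()
        self.count = 0

    def __missing__(self, key):
        if key.startswith('__'):
            # Let dunders (e.g. __name__) fall through to globals/builtins.
            raise KeyError(key)
        self[key] = self.count
        self.count += 1
        return self[key]

class AutoEnumMeta(type):
    @classmethod
    def __prepare__(mcs, name, bases, **kwds):
        return AutoDict()

class Color(metaclass=AutoEnumMeta):
    RED, GREEN, BLUE            # auto-created as 0, 1, 2
    if platfor == 'win32':      # typo for sys.platform: silently
        pass                    # becomes member number 3

print(Color.platfor)  # 3
```

This is exactly the failure mode in the quoted exchange: the typo `platfor` does not raise NameError; it quietly becomes an enum value.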
I realize this isn't going very far, but I would still appreciate
feedback.
The code is here:
It is based on ideas from Michael Foord, Antoine Pitrou, and
(eventually) Tim Delaney.
Enum only accepts upper-case names as enum candidates; all enum values
in a class must be of the same kind: either 'sequence' (0, 1, 2, etc.),
'flag' (1, 2, 4, etc.), or 'unique' (north, south, east, west, etc.);
and extending via subclassing is possible.

class Color(Enum):
    type = 'sequence'
    BLACK
    RED
    GREEN = 4
    BLUE
# Color(BLACK:0, RED:1, GREEN:4, BLUE:5)

class MoreColor(Color):
    MAGENTA
    YELLOW
# MoreColor(BLACK:0, RED:1, GREEN:4, BLUE:5, MAGENTA:6, YELLOW:7,

class DbfType(Enum):
    type = 'unique'
    DB3
    CLP
    VFP
# DbfType(DB3:'db3', CLP:'clp', VFP:'vfp')

class SomeFlags(Enum):
    type = 'flag'
    ON_DISK
    HAS_MEMO
    LARGE_CHAR
    UNICODE
# SomeFlags(ON_DISK:1, HAS_MEMO:2, LARGE_CHAR:4, UNICODE:8)

class Example(Enum):
    type = 'sequence'
    The_Other  # raises NameError
    THOSE = 1  # raises InvalidEnum
-->enum.Color.RED == 1
-->enum.SomeFlags.ON_DISK == 1
-->enum.SomeFlags.ON_DISK == enum.Color.RED
-->enum.MoreColor.RED == 1
-->enum.MoreColor.RED == enum.Color.RED
--> for color in enum.Color:
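For comparison, the enum module that eventually landed in the stdlib (PEP 435 in Python 3.4; IntFlag in 3.6) spells similar examples like this. This is not the API proposed above, just the closest stdlib equivalent:

```python
from enum import IntEnum, IntFlag

class Color(IntEnum):
    BLACK = 0
    RED = 1
    GREEN = 4
    BLUE = 5

class SomeFlags(IntFlag):
    ON_DISK = 1
    HAS_MEMO = 2
    LARGE_CHAR = 4
    UNICODE = 8

print(Color.RED == 1)          # True: IntEnum members compare to ints
print(SomeFlags.ON_DISK == 1)  # True
for color in Color:            # iteration in definition order
    print(color.name, color.value)
```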
I would appreciate any comments on both the API, and the code behind it.
This idea is not new, but it is stalled.
Last I remember, it came around on python-dev in 2010, in this thread:
There is an even older PEP (PEP 354) that was rejected just for there
not being enough interest at the time.
And it was not dismissed at all; to the contrary, the last e-mail in
the thread is a message from the BDFL for it to **be**! The discussion
happened at a bad moment, as Python was mostly feature-frozen for 3.2,
and it did not show up again for Python 3.3.
The reasons for wanting enums/constants have been debated already, but
one of the main ones to emerge from that thread is the ability to have
named constants (just like we have "True" and "False").
Why do I think this is needed in the stdlib, and that having it in a
3rd-party module is not enough? Because enums are an interesting thing
to have, not only in the stdlib, but in several widely used Python
projects that don't have other dependencies.
Having a feature like this in the stdlib allows those projects to make
use of it without adding dependencies; moreover, the users who would
benefit the most from such constants get a well-known "constant" type
that won't come as a surprise in each package they use interactively
or while debugging.
Most of the discussion on the 2010 thread was summed up in a message by
Michael Foord in this link
with some follow up here:
While analyzing some code I came across the following situation:
- User calls run_until_complete for some Task
- The task itself calls loop.stop() at some point
- run_until_complete raises TimeoutError
Here is a very simple example: https://gist.github.com/saghul/4754117
Something seems a bit off here. While one could argue that stopping
the loop while run_until_complete is ongoing is dubious in itself,
getting a TimeoutError doesn't feel right. I think we should detect
that the loop was stopped while the future is not done, and raise
something like NotCompleted.
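For reference, a minimal reproduction against today's asyncio (Tulip's descendant); note that modern asyncio raises RuntimeError in this situation rather than TimeoutError, so the exception type below reflects current behaviour, not the Tulip of this thread:

```python
import asyncio

async def stopper(loop):
    loop.stop()              # stop the loop from inside the task
    await asyncio.sleep(1)   # so the task itself never completes

loop = asyncio.new_event_loop()
caught = None
try:
    loop.run_until_complete(stopper(loop))
except RuntimeError as exc:
    # asyncio: "Event loop stopped before Future completed."
    caught = exc
finally:
    loop.close()

print(type(caught).__name__)  # RuntimeError
```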
Saúl Ibarra Corretgé
http://saghul.net/blog | http://about.me/saghul
I think Tulip should have synchronization primitives by default.
Here is my motivation:
1. It is more convenient to use locking primitives with existing
semantics rather than tulip magic. The crawl.py example could use a
semaphore instead of tasks.wait with timeout 0.
2. While it seems easy to implement a semaphore with tulip, it still
requires a deep understanding of tulip's control flow. I had two
not-so-obvious bugs in my semaphore implementation:
a) release() could be called from different coroutines during the same
scheduling step.
b) a bug with acquire and release during the same scheduling step:
   1. a task tries to acquire a locked semaphore
   2. another task releases the semaphore
As a result, the semaphore gets acquired twice at the end of the
scheduling step, because the actual acquisition happens in "call_soon",
and at that stage the release() call has already released the semaphore
but the first waiter has not yet acquired it.
my implementation: https://codereview.appspot.com/download/issue7230045_15001.diff
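For what it's worth, the guarantee at stake can be stated as a small check against the stdlib asyncio.Semaphore (the primitive Tulip eventually grew); `worker` and the counters are illustrative names:

```python
import asyncio

async def main(limit=2, n=6):
    sem = asyncio.Semaphore(limit)
    active = 0   # holders right now
    peak = 0     # most holders seen at once

    async def worker():
        nonlocal active, peak
        async with sem:
            active += 1
            peak = max(peak, active)
            # Yield so that releases and pending acquires land in the
            # same scheduling step -- the case that doubly-acquired the
            # buggy hand-rolled semaphore described above.
            await asyncio.sleep(0)
            active -= 1

    await asyncio.gather(*(worker() for _ in range(n)))
    return peak

print(asyncio.run(main()))  # never exceeds the limit of 2
```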