
Hello,

When I write tool modules that export useful names to client code, I usually use __all__ to select the proper names. Sure, that is a potential source of identifier conflicts. I also add a custom __*names__ module attribute that at least lets the client control which names the imported module defines:

    # module M
    __Mnames__ = [...]
    __all__ = ["__Mnames__"] + __Mnames__

Then "from M import *; print __Mnames__" outputs the needed naming information:

    from M import * ; print __Mnames__ ; print dir()
    ==>
    ['a', 'b', 'c']
    ['__Mnames__', '__builtins__', '__doc__', '__file__', '__name__', 'a', 'b', 'c']

[Indeed, you'd get the same information from M.__all__, but I find it strange to need both "from M import *" and "import M" only to access its __all__ attribute. Also, it happens that a module's name and its main defined name are identical, as with time.time.]

A complication arises when the toolset is structured as a hierarchy of modules. Then I have a top module that (often) only serves as a name interface, and each module exports a "name summary" attribute. For instance, in the case below (where M22 is an internal tool module, not to be exported):

    M0
        M1
        M2
            M21
            M22

the import/export schedule would be:

    # M21
    __M21names__ = [...]
    __all__ = ["__M21names__"] + __M21names__

    # M22
    __all__ = [...]

    # M2
    from M21 import *
    from M22 import *
    __M2names__ = [...] + __M21names__
    __all__ = ["__M2names__"] + __M2names__

    # M1
    __M1names__ = [...]
    __all__ = ["__M1names__"] + __M1names__

    # M0
    from M1 import *
    from M2 import *
    __M0names__ = [...] + __M1names__ + __M2names__
    __all__ = ["__M0names__"] + __M0names__

Now, when I modify a module in a way that changes, deletes or adds exported names, I only need to handle the update _locally_; it automatically climbs up the export chain. [Only a module name change remains problematic -- I'm thinking of an automatic module name lookup/update tool.]

Well, all of this looks a bit forced and rather unpythonic to me. It works fine, but I'm not satisfied. I wonder whether I'm overlooking something obvious. And if yes, what? Or conversely, do you think there could/should be a nicer (and builtin) way of doing such things?

Denis
------ la vita e estrany
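[For readers who want to try the pattern, here is a minimal, self-contained sketch of a single module using it. The module name mytool and the names a, b, c are placeholders for this illustration only, not part of the original post.]

    # mytool.py -- hypothetical tool module using the "name summary" pattern
    def a():
        return "a"

    def b():
        return "b"

    c = 42

    # the summary attribute lists the names meant for client code
    __mytoolnames__ = ["a", "b", "c"]
    # export the summary attribute itself along with the names it lists
    __all__ = ["__mytoolnames__"] + __mytoolnames__

    # client code (in another file):
    #   from mytool import *
    #   print __mytoolnames__      # -> ['a', 'b', 'c']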

Sorry, I didn't intend to send this post now -- it was rather a rough version waiting for further research -- but I hit the wrong button. Anyway, now it's published...

On Fri, 3 Apr 2009 10:19:15 +0200, spir <denis.spir@free.fr> wrote:
------ la vita e estrany

On Fri, Apr 03, 2009, spir wrote:
Your problem is that you're using import * -- stop doing that and you won't have an issue. The only good use cases for import * IMO are interactive Python and packages, and in the latter case I don't see why anyone would need the information you propose except for debugging purposes.

--
Aahz (aahz@pythoncraft.com)           <*>         http://www.pythoncraft.com/

"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it."  --Brian W. Kernighan
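[To make the suggestion concrete, an explicit-import version of the client code might look like the sketch below; M, a and b are just the placeholder names from the example earlier in the thread.]

    # instead of:  from M import *
    import M                  # keep M's names in M's own namespace
    from M import a, b        # or name exactly what you need

    print M.__all__           # the export list is reachable without a second import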

Aahz wrote:
We've recently discovered another use for it: overriding a pure Python implementation with optional native language accelerated components. Using "from _accelerated_name import *" allows other Python implementations to easily choose a different subset of functions and classes to accelerate.

That one is only relevant to people writing modules that they would like to work unchanged with more than one Python implementation though.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
---------------------------------------------------------------
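[As an illustration of that pattern -- the module names mytools and _mytools are hypothetical; several standard library modules use a similar structure:]

    # mytools.py -- pure Python reference implementation
    def crunch(data):
        """Slow but portable implementation."""
        return sorted(data)

    # If an accelerated extension module is available, let it override
    # (only) the names it chooses to export via its own __all__.
    try:
        from _mytools import *    # hypothetical accelerated module
    except ImportError:
        pass                      # fall back to the pure Python code above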

Dear Python idea-lists,

I've noticed that subgenerators, coroutines and such stuff are heavily discussed and on topic here right now. What I'd like to ask about is the mood for re-considering the deferred PEP 323 on copyable iterators and related ideas.

A while ago I did some work on copyable and picklable generators in pure Python. It works within some limitations but basically relies on a fierce and almost impenetrable (although documented) bytecode hack. It is bundled in a package called generator_tools. The design is mostly "test-driven", which means that I wouldn't dare in my life to try a more formal specification of the heuristics and special-case treatments it uses.

http://pypi.python.org/pypi/generator_tools/0.3.6

I know that this package has users, and there were also quite a few contributors who provided bug reports and fixes, so it isn't an academic exercise. I would love to see this package abandoned in the future and replaced by a proper, more complete and faster CPython implementation. I know I'm not really in a position to call for volunteers, in particular because I'm not a core developer and don't want to hack the CPython C code base. But maybe this would be a cool idea for a summer-of-code project or something similar. I know that Stackless Python has a working implementation, so it should at least be feasible.

Regards
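[For readers unfamiliar with PEP 323, the sketch below (Python 2 style, to match the thread) shows what "copyable iterator" means for a hand-written iterator class, whose state is ordinary attributes that copy.copy() can snapshot. Generators keep their state in a frame object instead, which is exactly why they need generator_tools' bytecode hack or interpreter-level support. The class name CountFrom is made up for this illustration.]

    import copy

    class CountFrom(object):
        """Hand-written iterator whose state is plain attributes,
        so copy.copy() can snapshot it at any point."""
        def __init__(self, start):
            self.current = start
        def __iter__(self):
            return self
        def next(self):                 # __next__ in Python 3
            value = self.current
            self.current += 1
            return value

    it = CountFrom(0)
    it.next(); it.next()                # advance to 2
    snapshot = copy.copy(it)            # independent copy of the iteration state
    print it.next(), snapshot.next()    # -> 2 2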

participants (5)
- Aahz
- Kay Schluehr
- Nick Coghlan
- spir
- Terry Reedy