On Mon, Jan 10, 2011 at 3:56 AM, Ron Adam firstname.lastname@example.org wrote:
On 01/09/2011 12:39 AM, Nick Coghlan wrote:
Also consider having virtual modules, where objects in it may have come from different *other* locations. A virtual module would need a way to keep track of that. (I'm not sure this is a good idea.)
It's too late, code already does that. This is precisely the use case I am trying to fix (objects like functools.partial that deliberately lie in their __module__ attribute), so that this can be done *right* (i.e. without having to choose which use cases to support and which ones to break).
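The lie is easy to see in a current CPython session, using the functools.partial example named above (the exact behaviour may vary slightly across versions, but the override has been there for a long time):

```python
import functools
import pickle

# partial is implemented in C in the _functools module, but its
# __module__ attribute deliberately claims the public location...
print(functools.partial.__module__)   # 'functools'

# ...which is exactly what lets pickle round-trip it by reference
# to the stable public name rather than the implementation detail.
p = pickle.loads(pickle.dumps(functools.partial(int, base=2)))
print(p("10"))   # int("10", base=2) == 2
```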
Yes, __builtins__ is a virtual module.
No, it's a real module, just like all the others.
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
KeyError: 'new'
And it's not in sys.modules yet. That's ok, other things can be loaded into it before it's added to sys.modules.
It's this loading part that can be improved.
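That sequence can be reproduced by hand; types.ModuleType is the modern spelling of what imp.new_module did at the time of this thread:

```python
import sys
import types

# A module object is a perfectly ordinary object; like __builtins__,
# it's a real module whether or not sys.modules has heard of it yet.
mod = types.ModuleType("scratch")
mod.answer = 42

assert "scratch" not in sys.modules   # not registered yet

# Other things can be loaded into it before registration...
mod.greet = lambda: "hello"

# ...and only after registration does it become importable by name.
sys.modules["scratch"] = mod
import scratch
assert scratch.answer == 42
del sys.modules["scratch"]            # clean up
```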
I don't understand the point of this tangent. The practice of how objects are merged into modules is already established: you use "import *" or some other form of import statement. I want to *make that work properly*, not invent a new way to do it.
The basic problem is that __module__ currently tries to serve two masters:

1. use cases like inspect.getsource, where we want to know where the object came from in the current interpreter
2. use cases like pickle, where we want the "official" portable location, with any implementation details (like the _functools module) hidden
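The second master can be observed directly: pickle resolves classes by the (module, name) pair taken from __module__ and the object's name, so the stream records the claimed portable location, never the implementation module. A small sketch using the real Unpickler.find_class hook:

```python
import functools
import io
import pickle

seen = []

class FindTracker(pickle.Unpickler):
    """Record every (module, name) pair pickle resolves by name."""
    def find_class(self, module, name):
        seen.append((module, name))
        return super().find_class(module, name)

data = pickle.dumps(functools.partial(int))
FindTracker(io.BytesIO(data)).load()

# pickle looked partial up in 'functools', never in '_functools':
assert ("functools", "partial") in seen
assert all(module != "_functools" for module, _ in seen)
```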
Most C extensions are written as modules, to be imported and imported from. A tool to load objects rather than import them may be better in these situations.
partial = imp.load_extern_object("_functools.partial")
A loaded object would have its __module__ attribute set to the module it's loaded into, instead of where it came from.
By doing it this way, it doesn't complicate the import semantics.
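No imp.load_extern_object exists; a rough pure-Python sketch of the behaviour being proposed (function name and signature invented for illustration) might look like:

```python
import importlib

def load_extern_object(dotted_path, target_module):
    """Hypothetical: fetch an object from one module and rebind its
    __module__ to the module it is being loaded into."""
    module_name, _, attr = dotted_path.rpartition(".")
    obj = getattr(importlib.import_module(module_name), attr)
    try:
        obj.__module__ = target_module  # not all objects allow this
    except (AttributeError, TypeError):
        pass
    return obj
```

Under this sketch, partial = load_extern_object("_functools.partial", "functools") would recreate by other means what functools already does by overriding __module__ directly.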
What complication to the import semantics? I'm not touching the import semantics, just the semantics for defining functions and classes.
It may also be useful to make it a special type, so that other tools can decide how to handle them.
No. The idea is to make existing code work properly, not force people to jump through new hoops.
Currently, the default behaviour of the interpreter is to support use case 1 and break use case 2 if any objects are defined in a different module from where they claim to live (e.g. see the pickle compatibility breakage with the 3.2 unittest implementation layout changes). The only tool currently available to module authors is to override __module__ (as functools.partial and the datetime acceleration module do), which is correct for use case 2, but breaks use case 1 (leading to misleading error messages in the C acceleration module case, and breaking otherwise valid introspection in the unittest case).
My proposed changes will:
a) make overriding __module__ significantly easier to do
b) allow the introspection use cases access to the information they need so they can do the right thing when confronted with an overridden __module__ attribute
It would be better to find solutions that don't override __module__ after it has been imported or loaded.
Again, no. My aim is to make existing practices not break things, rather than trying to get people to change their practices.
Most people will never need to care or worry about the difference between __module__ and __impl_module__ either - it will be hidden inside libraries like inspect, pydoc and pickle.
I think __impl_module__ should only be on objects where it would be different than __module__.
How does introducing an inconsistency like that make anything simpler? Optional attributes are painful to deal with, so we only use them for things where we don't fully control their creation (e.g. when we add new attributes to modules, PEP 302 means we can't assume they will exist when the module code is running, as third party loaders may not include them when initialising the module namespace). That is unlikely to be the case here.
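The pain of an only-sometimes-present attribute is the defensive lookup every consumer would have to spell out forever. A sketch, using the attribute name proposed in this thread (not an existing API):

```python
import functools

def impl_module(obj):
    # If __impl_module__ existed only "when different", every tool
    # (inspect, pydoc, pickle, third-party) would need this dance:
    return getattr(obj, "__impl_module__", obj.__module__)

# With the attribute always present, plain obj.__impl_module__
# would suffice. Today this simply falls back to __module__:
print(impl_module(functools.partial))   # 'functools'
```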