On 28 Oct 2013 08:41, "PJ Eby" <pje@telecommunity.com> wrote:
>
> On Sun, Oct 27, 2013 at 4:59 PM, Nick Coghlan <ncoghlan@gmail.com> wrote:
> >
> > On 28 Oct 2013 02:37, "PJ Eby" <pje@telecommunity.com> wrote:
> >>
> >> On Sun, Oct 27, 2013 at 1:03 AM, Nick Coghlan <ncoghlan@gmail.com> wrote:
> >> > Now, regarding the signature of exec_module(): I'm back to believing
> >> > that loaders should receive a clear indication that a reload is taking
> >> > place. Legacy loaders have to figure that out for themselves (by
> >> > seeing that the module already exists in sys.modules), but we can do
> >> > better for the new API by making the exec_module signature look like:
> >> >
> >> >     def exec_module(self, module, previous_spec=None):
> >> >         # module is as per the current PEP 451 text
> >> >         # previous_spec would be set *only* in the reload() case
> >> >         # loaders that don't care still need to accept it, but can
> >> >         # just ignore it
> >>
> >> Just to be clear, this means that a lazy import implementation that
> >> creates a module object without a __spec__ in the first place will
> >> look like an initial import?  Or will that crash importlib because of
> >> a missing __spec__ attribute?
> >>
> >> That is, is reload()'s contract adding a new prerequisite for the
> >> object passed to it?
> >>
> >> (The specific use case is creating a ModuleType subclass instance for
> >> lazy importing upon attribute access.  Pre-importlib, all that was
> >> needed was a working __name__ attribute on the module.)
> >
> > For custom loaders, that's part of the contract for create_module() (since
> > you'll get an ordinary module otherwise),
>
> Huh?  I don't understand where custom loaders come into it.  For that
> matter, I don't understand what "get an ordinary module object" means
> here, either.
>
> I'm talking about userspace code that implements lazy importing
> features, like the lazyModule() function in this module:
>
>    http://svn.eby-sarna.com/Importing/peak/util/imports.py?view=markup
>
> Specifically, I'm trying to get an idea of how much that code will
> need to change under the PEP (and apparently under importlib in
> general).

If the lazy import is injected by a *different* module, then nothing changes.

> > and so long as *setting* the
> > special module attributes doesn't cause the module to be imported during the
> > initial load operation, attribute access based lazy loading will work fine
> > (and you don't even have to set __name__, since the import machinery will
> > take care of that).
>
> There's no "initial load operation", just creation of a dummy module
> and stuffing it into sys.modules.  The way it works is that in, say,
> foo/__init__.py, one uses:
>
>      bar = lazyModule('foo.bar')
>      baz = lazyModule('foo.baz')
>
> Then anybody importing 'foo.bar' or 'foo.baz'  (or using "from foo
> import bar", etc.) ends up with the lazy module.  That is, it's for
> lazily exposing APIs, not something used as an import hook.

I was thinking of the more complex case where a module causes *itself* to be loaded lazily. Nothing changes for the simpler case where the injection occurs in a different module.
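
For concreteness, here's a rough sketch of that simpler injection pattern. It is deliberately simplified and is *not* the actual peak.util.imports code: the names are made up, and it dodges reload() entirely by copying the real module's namespace across rather than re-executing the code in place.

    import importlib
    import sys
    from types import ModuleType


    class _LazyModule(ModuleType):
        # Attribute names the import machinery itself may poke at; these
        # must not trigger the real import.
        _passthrough = frozenset({"__name__", "__class__", "__dict__", "__spec__"})

        def __getattribute__(self, name):
            if name in _LazyModule._passthrough:
                return ModuleType.__getattribute__(self, name)
            # First real attribute access: turn back into an ordinary module
            # (so this override never runs again), import the real code, and
            # adopt its namespace while keeping this object in sys.modules.
            self.__class__ = ModuleType
            modname = self.__name__
            sys.modules.pop(modname, None)
            try:
                real = importlib.import_module(modname)
            finally:
                sys.modules[modname] = self
            self.__dict__.update(real.__dict__)
            return ModuleType.__getattribute__(self, name)


    def lazy_module(name):
        # Called from *another* module, e.g. in foo/__init__.py:
        #     bar = lazy_module('foo.bar')
        if name not in sys.modules:
            sys.modules[name] = _LazyModule(name)
        return sys.modules[name]

As far as the import system is concerned, the placeholder is just another entry in sys.modules, which is why this case is unaffected by the PEP.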

> > For module level lazy loading that injects a partially initialised module
> > object into sys.modules rather than using a custom loader or setting a
> > __spec__ attribute, yes, the exec_module invocation on reloading would
> > always look like a fresh load operation (aside from the fact that the custom
> > instance would already be in sys.modules from the first load operation).
>
> Right.
>
>
> > It *will* still work, though (at least, it won't break any worse than such code
> > does today, since injecting a replacement into sys.modules really isn't
> > reload friendly in the first place).
>
> Wait, what?  Who's injecting a replacement into sys.modules?  A
> replacement of what?  Or do you mean that loaders aren't supposed to
> create new modules, but use the one in sys.modules?
>
> Honestly, I'm finding all this stuff *really* confusing, which is kind
> of worrying.  I mean, I gather I'm one of the handful of people who
> really understood how importing *used to work*, and I'm having a lot
> of trouble wrapping my brain around the new world.

Provide test cases that exercise the situations you're concerned about supporting, and then you won't need to worry about them breaking.
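
As one hedged illustration of what I mean (the module and attribute names are invented), a test along these lines would pin down the basic guarantee the pattern above relies on: a module injected into sys.modules by other code is handed back untouched by a later import statement.

    import sys
    import types
    import unittest


    class InjectedModuleTest(unittest.TestCase):

        def test_import_returns_injected_module(self):
            placeholder = types.ModuleType("fake_lazy_target")
            placeholder.marker = "injected"
            sys.modules["fake_lazy_target"] = placeholder
            try:
                # The import statement just hands back the sys.modules entry.
                import fake_lazy_target
                self.assertIs(fake_lazy_target, placeholder)
                self.assertEqual(fake_lazy_target.marker, "injected")
            finally:
                del sys.modules["fake_lazy_target"]


    if __name__ == "__main__":
        unittest.main()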

>
> (Granted, I think that may be because I understand how a lot of old
> corner cases work, but what's bugging me is that I no longer
> understand how those old corners work under the new regime, nor do I
> feel I understand what the *new* corners will be.  This may also just
> be communication problems, and the fact that it's been months since I
> really walked through importlib line by line, and have never really
> walked through it (or PEP 451) quite as thoroughly as I have import.c.
>  I also seem to be having trouble grokking why the motivating use
> cases for PEP 451 can't be solved by just providing people with good
> base classes to use for writing loaders -- i.e., I don't get why the
> core protocol has to change to address the use case of writing loaders
> more easily.  The new protocol seems way more complex than PEP 302,
> and ISTM the complexity could just be pushed off to the loader side of
> the protocol without creating more interdependency between importlib
> and the loaders.)

Bad loader implementations have too much power to break the import system, and loaders don't currently expose enough information about modules that *could* be imported.

We also take "inherit from this class to implement the protocol correctly; import will break if you don't" as a sign that the *protocol itself is broken*: it pushes too much of the work onto the plugins.

New style loaders can be much simpler, because the import system takes care of the complexity itself rather than relying on users inheriting from a particular base class.

The complexity comes from the fact that we're now breaking down what the *real* expectations on a loader actually are, and making those part of the import system itself.
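
To make that concrete, a minimal new style loader could look something like this (the method names follow the current PEP 451 draft plus the previous_spec idea quoted above, so treat it as a sketch rather than a settled API):

    class SourceStringLoader:
        """Loads a module from an in-memory string of source code."""

        def __init__(self, source):
            self.source = source

        def create_module(self, spec):
            # Returning None tells the import system to create an ordinary
            # module object itself; only return something here if you need
            # a custom module type.
            return None

        def exec_module(self, module, previous_spec=None):
            # previous_spec would only be supplied on reload() under the
            # proposal above; loaders that don't care can just ignore it.
            code = compile(self.source, "<source string>", "exec")
            exec(code, module.__dict__)

Compare that with the old load_module() contract, where the loader also had to create the module, add it to sys.modules, set __name__, __file__, __loader__ and __package__ itself, and clean up again if execution failed.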

Cheers,
Nick.