[Python-Dev] difficulty of implementing phase 2 of PEP 302 in Python source

Brett Cannon brett at python.org
Thu Sep 28 01:11:33 CEST 2006


On 9/27/06, Phillip J. Eby <pje at telecommunity.com> wrote:
>
> At 02:11 PM 9/27/2006 -0700, Brett Cannon wrote:
> >But it has been suggested here that the import machinery be rewritten in
> >Python.  Now I have never touched the import code since it has always had
> >the reputation of being less than friendly to work with.  I am asking for
> >opinions from people who have worked with the import machinery before if
> >it is so bad that it is worth trying to re-implement the import semantics
> >in pure Python, or if, in the name of time, I should just work with the C
> >code.  Basically I will end up breaking up built-in, .py, .pyc, and
> >extension modules into individual importers and then have a chaining class
> >to act as a combined .pyc/.py combination importer (this will also make
> >writing out to .pyc files an optional step of the .py import).
>
> The problem you would run into here would be supporting zip imports.


I have not looked at zipimport, so I don't know exactly how it hooks into
the import machinery.  But a C-level API will most likely be needed.

> It would probably be more useful to have a mapping of file types to
> "format handlers", because then a filesystem importer or zip importer
> would be able to work with any .py/.pyc/.pyo/whatever formats, along with
> any new ones that are invented, without reinventing the wheel.


So you are saying the zipimporter would pull the individual file out of
the zip archive and pass its contents to the format-specific importer?
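
Something along these lines is what I am picturing (just a sketch;
ZipDataStore and everything in it is a made-up name, not how zipimport
actually works today):

    import zipfile

    class ZipDataStore:
        """Data store that only knows how to fetch bytes out of a zip
        archive; it knows nothing about module formats."""

        def __init__(self, archive_path):
            self.zip = zipfile.ZipFile(archive_path)

        def get_data(self, path):
            # Surface missing members as IOError so format handlers can
            # treat a zip exactly like any other PEP 302 data source.
            try:
                return self.zip.read(path)
            except KeyError:
                raise IOError('no such file in archive: %r' % path)

A .py format handler would then ask the store for 'foo.py' and never care
whether the bytes came from a directory, a zip archive, or a URL.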

> Thus, whether it's file import, zip import, web import, or whatever, the
> same handlers would be reusable, and when people invent new extensions
> like .ptl, .kid, etc., they can just register format handlers instead.


So a separation of the data store from the data interpretation for
importation.  My only worry is a possible explosion of checks for the
various data types.  If you are using the file data store and have .py,
.pyc, .so, module.so, .ptl, and .kid all registered, the repeated
existence checks could hurt performance.  And I am assuming a web import
would decide based on the extension in the resulting web address?
Probing for every type might also not work well for other kinds of data
store.  I guess you would need a way to register with the data store
exactly which kinds of data interpretation it should check for.
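
Maybe the registration could just be an extension-to-handler mapping that
each data store consults for only the extensions it cares about.  A
minimal sketch, with every name invented:

    # Global registry mapping a file extension to the format handler
    # that knows how to turn that kind of data into a module.
    format_handlers = {}

    def register_format(extension, handler):
        format_handlers[extension] = handler

    class FileDataStore:
        # A store advertises which extensions are worth probing for, so
        # registering .ptl and .kid handlers does not slow down a store
        # that can only ever hold .py source.
        extensions = ['.pyc', '.py']

        def handlers_to_try(self):
            return [(ext, format_handlers[ext])
                    for ext in self.extensions if ext in format_handlers]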

The other option is to have the data store do its magic and somehow know
what kind of data interpretation is needed for the string it returns
(e.g., a database data store might implicitly store only .py code and
thus know it will only ever return a string of source).  That string and
the supposed file extension are then passed to the next step of creating
a module from the data.
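
E.g., a source-only database store might come out something like this
(purely hypothetical; it assumes a sqlite3-style connection and a
"modules" table that do not exist anywhere):

    class DatabaseDataStore:
        """Store that by construction holds only .py source, so it can
        tell the next stage what it is handing over without probing."""

        def __init__(self, connection):
            self.connection = connection

        def get_source(self, fullname):
            row = self.connection.execute(
                'SELECT source FROM modules WHERE name = ?',
                (fullname,)).fetchone()
            if row is None:
                raise IOError('no module named %r' % fullname)
            # The data plus its supposed file extension for the next step.
            return row[0], '.py'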

> Format handlers could of course be based on the PEP 302 protocol, and
> simply accept a "parent importer" with a get_data() method.  So, let's say
> you have a PyImporter:
>
>      class PyImporter:
>          def __init__(self, parent_importer):
>              self.parent = parent_importer
>
>          def find_module(self, fullname):
>              path = fullname.split('.')[-1]+'.py'
>              try:
>                  source = self.parent.get_data(path)
>              except IOError:
>                  return None
>              else:
>                  return PySourceLoader(source)
>
> See what I mean?  The importers and loaders thus don't have to do direct
> filesystem operations.


I think so.  Basically you want a way to stack importers so that the
basic ones are just handed the string they are supposed to load from,
while importers higher in the chain take care of fetching that string.
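
To finish the sketch, the loader half might be as simple as this (my
guess at what PySourceLoader would do; it is not an existing class):

    import imp
    import sys

    class PySourceLoader:
        """Loader that compiles source text it was already handed, so
        it never has to touch the data store itself."""

        def __init__(self, source, filename='<unknown>'):
            self.source = source
            self.filename = filename

        def load_module(self, fullname):
            code = compile(self.source, self.filename, 'exec')
            module = sys.modules.setdefault(fullname, imp.new_module(fullname))
            module.__file__ = self.filename
            module.__loader__ = self
            exec(code, module.__dict__)
            return module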

> Of course, to fully support .pyc timestamp checking and writeback, you'd
> need some sort of "stat" or "getmtime" feature on the parent importer, as
> well as perhaps an optional "save_data" method.  These would be extensions
> to PEP 302, but welcome ones.


You could instead pass along a string giving the location the data came
from.  That would allow the required stat calls for .pyc files as
needed, without having to grow new methods just for this one use case.
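
In other words, handed the real locations, the .pyc handler can do the
freshness check itself.  Roughly the comparison import.c does today
(sketch only):

    import imp
    import os
    import struct

    def bytecode_is_fresh(source_path, bytecode_path):
        """Compare the mtime recorded in a .pyc header against the
        source file's mtime."""
        try:
            f = open(bytecode_path, 'rb')
        except IOError:
            return False
        try:
            header = f.read(8)
        finally:
            f.close()
        # A .pyc starts with a 4-byte magic number followed by the
        # source's mtime as a little-endian 32-bit integer.
        if len(header) < 8 or header[:4] != imp.get_magic():
            return False
        pyc_mtime = struct.unpack('<I', header[4:8])[0]
        return pyc_mtime == int(os.path.getmtime(source_path))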

> Anyway, based on my previous work with pkg_resources, pkgutil, zipimport,
> import.c, etc. I would say this is how I'd want to structure a
> reimplementation of the core system.  And if it were for Py3K, I'd
> probably treat sys.path and all the import hooks associated with it as a
> single meta-importer on sys.meta_path -- listed after a meta-importer for
> handling frozen and built-in modules.  (I.e., the meta-importer that uses
> sys.path and its path hooks would be last on sys.meta_path.)


Ah, interesting idea!  You could even go as far as removing sys.path
entirely and making it an attribute of that base importer, if you really
wanted imports to go purely through meta_path.
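
Something like this, with the whole path-based mechanism living behind
one object (PathImporter is made up; this just illustrates the shape):

    import sys

    class PathImporter:
        """Meta-importer owning what is currently sys.path plus the
        path-hook machinery; it would sit last on sys.meta_path, after
        a meta-importer for frozen and built-in modules."""

        def __init__(self, path):
            self.path = list(path)  # could replace sys.path outright

        def find_module(self, fullname, path=None):
            for entry in (path or self.path):
                importer = self._importer_for(entry)
                if importer is None:
                    continue
                loader = importer.find_module(fullname)
                if loader is not None:
                    return loader
            return None

        def _importer_for(self, entry):
            # The sys.path_hooks dance, inlined; a real version would
            # also keep a sys.path_importer_cache equivalent.
            for hook in sys.path_hooks:
                try:
                    return hook(entry)
                except ImportError:
                    continue
            return None

Installing it would then just be sys.meta_path.append(PathImporter(sys.path)).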

> In other words, sys.meta_path is really the only critical import hook from
> the raw interpreter's point of view.  sys.path, however, (along with
> sys.path_hooks and sys.path_importer_cache) is critical from the
> perspective of users, applications, etc., as there has to be some way to
> get things onto Python's path in the first place.
Yeah, I think I get it.  I don't know how much it simplifies things for
users, but it should make life easier for authors of alternative
importers.

-Brett