[Python-Dev] Re: [Import-sig] Re: Proposal for a modified import mechanism.
Prabhu Ramachandran
Prabhu Ramachandran <prabhu@cyberwaveindia.com>
Sun, 11 Nov 2001 13:12:03 +0530
Hi,
>>>>> "ej" == "eric" <ej@ee.duke.edu> writes:
>> Currently, os.py in a package masks the real one from anywhere
>> inside the package. This would extend that to anywhere inside
>> any nested subpackage. Whether that's a "neat" or a "dirty"
>> trick is pretty subjective. The wider the namespace you can
>> trample on, the more it tends to be "dirty".
ej> Yeah, I guess I come down on the "neat" side in this one. If
ej> I have a module or package called 'common' at the top level of
ej> a deep hierarchy, I'd like all sub-packages to inherit it.
ej> That seems intuitive to me and inline with the concept of a
ej> 'package'. Perhaps the hijacking of the Numeric example
ej> strikes a nerve, but inheriting the 'common' module shouldn't
ej> be so contentious. Also, if someone has the gall to hijack
ej> os.py at the top of your package directory structure, it seems
ej> very likely you want this new behavior everywhere within your
ej> package.
I agree with this.  Also, each package remains fairly isolated: a
module like os.py inside a subpackage won't affect _every_ other
package, only the packages nested inside that particular one.  So
there is some kind of safety net, and it's not like sticking
everything on sys.path. :)
Also, right now, what prevents someone from sticking an os.py
somewhere on sys.path and completely ruining standard behaviour?  So
it's not as if this new approach to importing packages makes things
dirty; you can very well do 'bad' things right now.
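To make the lookup order concrete, here is a rough sketch of the idea
(my own illustration, not the actual my_import code; the function name
is made up, and I'm assuming the closest enclosing package wins):

import sys, imp

def find_in_package_or_parents(name, package_name):
    # Hypothetical helper: walk from the requesting package up through
    # its parent packages, looking for 'name' as a submodule of each,
    # and fall back to an ordinary sys.path lookup if none of them
    # has it.
    parts = package_name.split('.')
    while parts:
        pkg = sys.modules.get('.'.join(parts))
        if pkg is not None and hasattr(pkg, '__path__'):
            try:
                return imp.find_module(name, pkg.__path__)
            except ImportError:
                pass
        parts.pop()                    # move one level up the tree
    return imp.find_module(name)       # plain top-level lookup

So a module buried inside, say, a.b.c doing 'import common' would pick
up the common.py closest to it in the package tree (a/b/common.py
before a/common.py), while packages outside a are completely
unaffected.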
[snip]
ej> As for overhead, I thought I'd get a couple more data points
ej> from distutils and xml since they are standard packages. The
ej> distutils import is pretty much a 0% hit. However, the xml
ej> import is *much* slower -- a factor of 3.5. Thats a huge hit
ej> and worth complaining about. I don't know if this can be
ej> optimized or not. If not, it may be a show stopper, even if
ej> the philosophical argument was uncontested.
>>>> import my_import
>>>> import time
>>>> t1 = time.time();import xml.sax.saxutils; t2 = time.time();print t2-t1
1.35199999809
>>>> import time
>>>> t1 = time.time();import xml.sax.saxutils; t2 = time.time();print t2-t1
0.381000041962
IMHO, this is an unfair/wrong comparison.
(0) I suspect that you did not first warm things up by doing a
    plain import xml.sax.saxutils a few times before starting to
    test, so the first figure includes the cost of actually loading
    the module and not just the import machinery (see the sketch
    below).
(1) The builtin import is implemented in C.  my_import is pretty
    much completely in Python.
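For instance, something along these lines (my own helper, written just
to illustrate the method) warms the module up first and then averages
over repeated imports, so that only the import machinery is being
timed:

import time

def time_warm_import(n=100):
    # The first import pays the real loading cost; doing it once up
    # front means the timed imports below only exercise the import
    # machinery, since the module itself comes out of sys.modules.
    import xml.sax.saxutils
    start = time.time()
    for i in range(n):
        import xml.sax.saxutils
    return (time.time() - start) / n

print time_warm_import()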
Here is a fairer comparison (done after a few imports).
>>> import time
>>> s = time.time (); import xml.sax.saxutils; print time.time()-s
0.0434629917145
>>> import my_import
>>> import time
>>> s = time.time (); import xml.sax.saxutils; print time.time()-s
0.0503059625626
Which is still not bad at all, and nowhere near the factor-of-3.5
slowdown reported above.  But to see whether the measured slowdown
really comes from the parent-package lookup, we need to compare
against the modified knee.py (the one that caches failed lookups; a
sketch of the caching idea follows the numbers):
>>> import knee
>>> import time
>>> s = time.time (); import xml.sax.saxutils; print time.time()-s
0.0477709770203
>>> import my_import
>>> import time
>>> s = time.time (); import xml.sax.saxutils; print time.time()-s
0.0501489639282
Which is really not very bad, since it's just a 5% slowdown.
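By the way, the "cache failures" change amounts to roughly the
following (a sketch of the idea only, not the actual code in my
modified knee.py; the names are made up):

_failed = {}    # (package, name) -> 1, for lookups known to fail

def lookup_with_failure_cache(name, package_name, finder):
    # 'finder' stands for whatever actually locates the module (a
    # knee/imp style lookup); we simply short-circuit lookups that
    # have already failed once, instead of repeating the directory
    # search on every import statement.
    key = (package_name, name)
    if _failed.has_key(key):
        raise ImportError("No module named %s" % name)
    try:
        return finder(name, package_name)
    except ImportError:
        _failed[key] = 1
        raise

The point is just to avoid repeating a directory search that is
already known to fail.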
Here are more tests for scipy:
>>> import time
>>> s = time.time (); import scipy; print time.time()-s
1.36110007763
>>> import knee, time
>>> s = time.time (); import scipy; print time.time()-s
1.48176395893
>>> import my_import, time
>>> s = time.time (); import scipy; print time.time()-s
1.5150359869
Which means that the parent lookup itself is really not so bad in this
case; most of the slowdown comes from knee being implemented in Python
rather than C.  And there is certainly no factor-of-3.5 slowdown!
:)
prabhu