Questions about multiple imports.

Alex Martelli aleaxit at
Fri Sep 8 09:24:59 CEST 2000

"Steve Juranich" <sjuranic at> wrote in message
news:Pine.SOL.3.96.1000907171840.21405K-100000 at
> I've read the docs on this, and it's still not entirely clear to me. In a
> module, I've imported the re, os, and sys modules.  Now in a parent module
> I will also need some functionality from re, sys, and os.  Does it severely
> hinder performance to re-import the 3 system modules into the parent
> namespace,

No.  As import is something you do once at the start, in a substantial
application you only pay it as a startup cost, so it would not hinder
performance significantly even if it were a higher cost than it is.

> or should I just resolve myself to getting at the functions by
> tacking on an additional "name." construction (e.g.,

Such multiple-level references, albeit a minute cost each, are
more likely to be costs incurred in 'hot spots' (nested loops &c)
and so to hinder (not fatally, but measurably) your performance.

In fact, one simple optimization that's worth applying to spots
you know to be hot is binding a reference to whatever you're
using in the nested-loop into a LOCAL variable first: e.g., change:

def hotStuff(n, flik):
    for i in range(n):
        for j in range(n):
            # fee.fie.foe.fum is looked up n*n times
            flik(fee.fie.foe.fum(i, j))

into:

def hotStuff(n, flik):
    istboae = fee.fie.foe.fum   # one lookup, bound to a local name
    for i in range(n):
        for j in range(n):
            flik(istboae(i, j))

This is not going to make a HUGE difference, but it will help
a little bit, as you pay the lookup cost once rather than N
squared times.  The Python compiler clearly can't do this kind
of optimization for you -- for all it knows, each ...fum() call
could change, say, the fie field of global variable/module fee,
so it must repeat the lookup each time; this optimization can
be seen as letting Python [and the human reader] know what
you already do, i.e. that the _same_ function is going to be
called n-squared times, so, don't look it up every time...
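A runnable sketch of the same transformation, with math.sqrt standing in for the made-up fee.fie.foe.fum chain (the function names here are illustrative); both versions compute the identical result, but the second pays the attribute lookup once instead of n-squared times:

```python
import math

def hot_slow(n):
    # math.sqrt is looked up on every inner iteration: n*n lookups
    total = 0.0
    for i in range(n):
        for j in range(n):
            total += math.sqrt(i + j)
    return total

def hot_fast(n):
    sqrt = math.sqrt  # one lookup, bound to a fast local name
    total = 0.0
    for i in range(n):
        for j in range(n):
            total += sqrt(i + j)
    return total
```

The difference shows up only when the loop body is otherwise cheap; for heavyweight bodies the lookup cost is lost in the noise.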
