On 11/4/06, Jean-Paul Calderone <exarkun@divmod.com> wrote:
> On Sun, 05 Nov 2006 14:21:34 +1300, Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
> >Fredrik Lundh wrote:
> >
> >> well, from a performance perspective, it would be nice if Python looked
> >> for *fewer* things, not more things.
> >
> >Instead of searching for things by doing a stat call
> >for each possible file name, would it perhaps be
> >faster to read the contents of all the directories
> >along sys.path into memory and then go searching
> >through that?
>
> Bad for large directories.  There's a cross-over at some number
> of entries.  Maybe Python should have a runtime-tuned heuristic
> for selecting a filesystem traversal mechanism.
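To make the trade-off concrete, here is a rough sketch of the two lookup strategies being compared; the names find_by_stat and find_by_listing are made up for the example, and neither is real interpreter code:

    # Purely illustrative -- simplified candidate handling, invented names.
    import os

    def find_by_stat(name, path):
        """Current-style lookup: one stat() per candidate file name."""
        for directory in path:
            for candidate in (os.path.join(directory, name + '.py'),
                              os.path.join(directory, name + '.pyc'),
                              os.path.join(directory, name, '__init__.py')):
                if os.path.exists(candidate):    # each test is a stat call
                    return candidate
        return None

    def find_by_listing(name, path, _cache={}):
        """Greg's suggestion: read each directory once, search in memory."""
        for directory in path:
            if directory not in _cache:
                try:
                    _cache[directory] = set(os.listdir(directory))  # one readdir
                except OSError:
                    _cache[directory] = set()
            entries = _cache[directory]
            if name + '.py' in entries:
                return os.path.join(directory, name + '.py')
            if name + '.pyc' in entries:
                return os.path.join(directory, name + '.pyc')
            if name in entries:                  # maybe a package directory
                init = os.path.join(directory, name, '__init__.py')
                if os.path.exists(init):
                    return init
        return None

For a huge directory, building the in-memory listing can cost more than the handful of stat calls it replaces, which is the cross-over Jean-Paul is pointing at.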
Hopefully my import rewrite is flexible enough that people will be able to plug in their own importer/loader for the filesystem so that they can tune how things like this are handled (e.g., caching what files are in a directory, skipping bytecode files, etc.).
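To give a sense of what such plugging-in could look like, here is a purely illustrative PEP 302-style path hook; the CachingImporter name and the source-only handling are simplifications for the sketch, not code from the rewrite:

    import imp
    import os
    import sys

    class CachingImporter(object):
        """Path hook that reads each sys.path directory once and answers
        imports out of that cached listing (source files only, for brevity)."""

        def __init__(self, directory):
            if not os.path.isdir(directory):
                raise ImportError(directory)     # decline non-directory path entries
            self.directory = directory
            self.entries = set(os.listdir(directory))  # one readdir, no per-candidate stats

        def find_module(self, fullname, path=None):
            tail = fullname.rsplit('.', 1)[-1]
            if tail + '.py' in self.entries:     # note: bytecode files are skipped entirely
                return self
            return None                          # let the default machinery try

        def load_module(self, fullname):
            tail = fullname.rsplit('.', 1)[-1]
            filename = os.path.join(self.directory, tail + '.py')
            mod = imp.load_source(fullname, filename)
            sys.modules[fullname] = mod
            return mod

    # To try it ahead of the built-in filesystem handling:
    #     sys.path_hooks.insert(0, CachingImporter)
    #     sys.path_importer_cache.clear()

Clearing sys.path_importer_cache matters because directories that have already been seen are otherwise never rerouted through the new hook.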
-Brett