On Sat, Oct 27, 2012 at 4:40 PM, Mark Shannon <firstname.lastname@example.org> wrote:
I suspect that stat'ing and loading the .pyc files is responsible for most of the overhead.
On 27/10/12 20:21, Antoine Pitrou wrote:
On Sat, 27 Oct 2012 09:20:36 -0400
Brett Cannon <email@example.com> wrote:
I did check that MarkupSafe was not installed. It might just be Mako doing
The threads tests are very synthetic.
And yes, there are more modules at startup. When was the last time we
looked at them to make sure we weren't doing needless imports?
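One way to audit this is to compare what ends up in sys.modules after a bare startup against a startup with -S (skipping site.py). A minimal sketch, using only subprocess and sys:

```python
# Sketch: list the modules already imported after a bare interpreter
# startup, to audit for needless imports.  Running each case in a
# fresh subprocess keeps the measurements independent.
import subprocess
import sys

def startup_modules(extra_args=()):
    """Return the set of module names in sys.modules right after
    startup of this interpreter, with the given extra flags."""
    out = subprocess.check_output(
        [sys.executable, *extra_args, "-c",
         "import sys; print('\\n'.join(sorted(sys.modules)))"],
    )
    return set(out.decode().split())

default = startup_modules()
no_site = startup_modules(["-S"])

# Modules pulled in only because of site.py:
print(sorted(default - no_site))
```

The exact sets depend on the interpreter version and platform, so this is best used to diff two builds rather than to quote absolute numbers.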
The last time was between 3.2 and 3.3. It will be hard to lower the
number of imported modules, given the current semantics (io, importlib,
unicode, site.py, sysconfig...). Python 2's view of the world was much
simpler (naïve?) in comparison.
It would be interesting to know *where* the module import time gets
spent, on a lower level. My gut feeling is that execution of Python
module code is the main contributor.
I really doubt that, as the number of stat calls is significantly reduced in Python 3.3 compared to Python 3.2 (startup benchmarks show Python 3.3 is roughly 1.66x faster than 3.2 thanks to caching directory contents). More modules means more work (e.g. I/O, executing the module code, etc.).
The only way to lower stat call overhead further is to simply not check whether a directory's contents have changed during startup, by assuming Python itself will not write any new module files. Without benchmarking I don't know if it would make that much of a difference, though.
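To see where per-module cost actually lands, each import can be timed in a fresh interpreter so that earlier imports don't hide the cost. A rough sketch (the module names below are just illustrative examples; wall-clock numbers will be noisy):

```python
# Sketch: time a single 'import <name>' in a fresh subprocess, so the
# measurement includes the module's own transitive imports but not
# anything already loaded in this process.
import subprocess
import sys

def import_time(module_name):
    """Return the wall-clock seconds a fresh interpreter spends on
    'import <module_name>'."""
    code = (
        "import time; t0 = time.perf_counter(); "
        f"import {module_name}; "
        "print(time.perf_counter() - t0)"
    )
    out = subprocess.check_output([sys.executable, "-c", code])
    return float(out)

for name in ("sysconfig", "collections", "json"):
    print(f"{name:>12}: {import_time(name) * 1000:.2f} ms")
```

Modules that are already imported at startup will report near-zero times here, which is itself useful information when deciding what could be deferred.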
PyRun starts up quite a lot faster thanks to embedding all the modules in the executable: http://www.egenix.com/products/python/PyRun/
Freezing all the core modules into the executable should reduce start up time.
Sure, but working with a frozen module is a pain, so it is not something to take on lightly.
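For context, frozen modules are already part of every CPython build (the importlib bootstrap itself is frozen), and they can be inspected from Python. A small sketch, assuming a CPython new enough (3.4+) to expose FrozenImporter.find_spec:

```python
# Sketch: check which modules are frozen into this interpreter binary.
# FrozenImporter handles modules compiled into the executable, such as
# the importlib bootstrap (_frozen_importlib).
from importlib.machinery import FrozenImporter

def is_frozen(name):
    """Return True if 'name' is a module frozen into this executable."""
    return FrozenImporter.find_spec(name) is not None

print(is_frozen("_frozen_importlib"))  # the importlib bootstrap
print(is_frozen("json"))               # loaded from disk as usual
```

The pain point mentioned above is visible here too: a frozen module has no source file on disk to edit, so changing it means rebuilding the executable.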