On Mon, May 29, 2017 at 7:26 AM, Donald Stufft <donald@stufft.io> wrote:
> On May 29, 2017, at 3:05 AM, Nathaniel Smith <njs@pobox.com> wrote:
>
>> I think there's some pip bug somewhere discussing this, where Ralf Gommers and I point out that this is a complete showstopper for projects with complex and expensive builds (like scipy). If 'pip install .' is going to replace 'setup.py install', then it needs to support incremental builds, and the way setup.py-and-almost-every-other-build-tool do this currently is by reusing the working directory across builds.
>
> Wouldn’t supporting incremental builds the way ccache does work just fine? Have a per build tool cache directory somewhere that stores cached build output for each individual file keyed off a hash or something? (For that matter, if someone wants incremental rebuilds, couldn’t they just use ccache as their CC?).
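For concreteness, "use ccache as their CC" amounts to something like the following (gcc is just an example here; any compiler ccache can wrap works the same way):

    CC="ccache gcc" python setup.py build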
With a random numpy checkout on my laptop and a fully-primed ccache, some wall-clock timings:
no-op incremental build (python setup.py build): 1.186 seconds
python setup.py sdist: 3.213 seconds
unpack resulting tarball: 0.136 seconds
python setup.py build in unpacked tree: 7.696 seconds
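Roughly speaking, those four numbers come from commands like the ones below (the tarball name is just whatever sdist spat out; the last build runs inside the unpacked tree):

    time python setup.py build          # no-op incremental build
    time python setup.py sdist
    time tar xzf dist/numpy-*.tar.gz
    time python setup.py build          # fresh build in the unpacked tree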
So with ccache, the sdist-and-build path totals about 11 seconds (3.2 + 0.1 + 7.7) versus 1.2 seconds: a mere 10x slower than an in-place incremental build.
ccache is great, but it isn't magic. It can't make copying files faster (notice we're already 3x slower before we even start building!), it doesn't speed up linking, and you still need to spawn all those processes and hash all that source code instead of just making some stat() calls.
Also, this is on Linux. The numbers would look much worse on Windows, given that it generally has much higher overhead for unpacking tarballs and spawning lots of processes, and also given that ccache doesn't support MSVC!
Also also, notice elsewhere in the thread where Thomas notes that flit can't build an sdist from an unpacked sdist. It seems like 'pip install unpacked-sdist/' is an important use case to support...
-n
--
Nathaniel J. Smith -- https://vorpus.org