This also ties into the Pipfile idea Kenneth is working on, since it then makes sense to build a wagon/wheelhouse for a lock file. To tie into the container aspect as well: if you dev on Windows but deploy to Linux, this lets you gather your Linux dependencies locally on your Windows box and then deploy the set as a unit to your server (something Steve Dower and I have thought about, and part of why we support a lock file concept). And if we use zip files with no nesting, then as long as the bundle contains only Python code you could use zipimporter on it directly.
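Roughly what I have in mind for that last point (an untested sketch; "plugins-bundle.zip" and "someplugin" are made-up names, just for illustration):

    # Untested sketch -- a flat zip of pure-Python code can be put straight
    # on sys.path; the interpreter's built-in zipimport machinery then loads
    # modules out of it without unpacking anything.
    import sys

    sys.path.insert(0, "plugins-bundle.zip")

    import someplugin  # resolved from inside the zip bundle

The caveat is that it has to stay pure Python: no C extensions, and nothing that expects a real file next to __file__.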
On Tue, Nov 22, 2016, 22:07 Nick Coghlan, <ncoghlan@gmail.com> wrote:

[Some folks are going to get this twice - unfortunately, Google's
mailing list mirrors are fundamentally broken, so replies to them
don't actually go to the original mailing list properly]
(Note for context: I stumbled across Wagon recently, and commented
that we don't currently have a good target-environment-independent way
of bundling up a set of wheels as a single transferable unit)
On 23 November 2016 at 03:44, Nir Cohen <nir36g@gmail.com> wrote:
> We came up with a tool (http://github.com/cloudify-cosmo/wagon) to do just
> that and that's what we currently use to create and install our plugins.
> While wheel solves the problem of generating wheels, there is no single,
> standard method for taking an entire set of dependencies packaged in a
> single location and installing them in a different location.
Where I see this being potentially valuable is in terms of having a
common "multiwheel" transfer format that can be used for cases where
the goal is essentially wheelhouse caching and transfer. The two main
cases I'm aware of where this comes up are (offline case sketched below):
- offline installation support (i.e. the Cloudify plugins use case,
where the installation environment doesn't have networked access to an
index server)
- saving and restoring the wheelhouse cache (e.g. this comes up in
container build pipelines)
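(For concreteness, the offline case with today's tooling looks something
like the following - the directory and requirements file names are just
placeholders, and driving pip through subprocess is only for illustration:)

    # Illustration only; "wheelhouse" and "requirements.txt" are placeholders.
    import subprocess
    import sys

    # On a machine that *can* reach an index: collect all dependencies
    # as wheels into a single directory.
    subprocess.run([sys.executable, "-m", "pip", "wheel",
                    "--wheel-dir", "wheelhouse",
                    "-r", "requirements.txt"], check=True)

    # In the offline installation environment: install from that
    # directory alone, never touching an index server.
    subprocess.run([sys.executable, "-m", "pip", "install",
                    "--no-index", "--find-links", "wheelhouse",
                    "-r", "requirements.txt"], check=True)

The missing piece is a standard way to move that wheelhouse between the
two steps as a single artifact.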
The latter problem arises from an issue with the way some container
build environments (most notably Docker's) currently work: they always
run in a clean environment, which means they can't see the host's
wheel cache. One of the solutions to this is to let container builds
specify a "cache state" which is archived by the build management
service at the end of the build process, and then restored when
starting the next incremental image build.
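(Sketching that idea: the cache location below assumes pip's default on
Linux, and the archive name is arbitrary - this is just to make the
save/restore step concrete:)

    # Illustration only: save and restore pip's cache directory between builds.
    import os
    import tarfile

    cache_dir = os.path.expanduser("~/.cache/pip")  # pip's default cache on Linux
    archive = "pip-cache.tar.gz"                    # arbitrary name for the saved state

    # End of a build: archive the cache state for the build management service.
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(cache_dir, arcname="pip")

    # Start of the next incremental build: restore it before pip runs.
    os.makedirs(os.path.dirname(cache_dir), exist_ok=True)
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(path=os.path.dirname(cache_dir))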
This kind of cache transfer is already *possible* today, but having a
standardised way of doing it makes it easier for people to write
general purpose tooling around the concept, without requiring that the
tool used to create the archive be the same tool used to unpack it at
install time.
Cheers,
Nick.
--
Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia