[Distutils] zc.buildout & Docker container images

Michael Merickel michael at merickel.org
Thu Jul 3 17:36:54 CEST 2014


I don't normally like shameless plugs, but I haven't seen many people use
Docker in the following way yet, and I happen to be using it with
zc.buildout.

I have built a small tool called marina[1] that I'm using to build binary
slugs that can be installed in production. It uses throwaway (clean-room)
containers to compile the application (using buildout or whatever else),
and then dumps out an archive (or tags a new Docker image) of the built
assets, which can be deployed. Marina does not use Dockerfiles explicitly;
instead you make a parent image however you like (with buildpacks or
whatever else). The image is then executed with your scripts (such as
buildout) and the assets are archived. This approach keeps ssh keys and
credentials out of the images. A separate data-only container is also
mounted to allow caching of assets (the eggs folder, for example) between
runs, which speeds up the build process dramatically. I'm using it on OS X
to build binary tarballs for Ubuntu. The tarballs themselves can then be
deployed as a Docker image or directly on a VM. It's all very alpha, but
I'm using it successfully with ansible to distribute tarballs to VMs in
production.
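The clean-room flow described above can be sketched roughly like this (the image, container, and path names here are assumptions for illustration, not marina's actual API):

```python
# Rough sketch of a clean-room container build; all names are
# hypothetical, not marina's real interface.
import subprocess

def clean_room_build(parent_image, build_cmd, cache_container, out_dir,
                     run=subprocess.check_call):
    # Run the build inside a throwaway container, mounting a data-only
    # container so downloads (eggs, etc.) are cached between runs.
    run(["docker", "run", "--name", "build-run",
         "--volumes-from", cache_container,
         parent_image, "/bin/sh", "-c", build_cmd])
    # Copy the built assets out of the stopped container...
    run(["docker", "cp", "build-run:/app", out_dir])
    # ...then throw the container away, so ssh keys or credentials used
    # during the build never end up in anything that ships.
    run(["docker", "rm", "build-run"])
```

The point of the pattern is that nothing from the build container survives except the archived assets, so build-time secrets stay out of the deployed image or tarball.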

[1] https://github.com/mmerickel/marina

Michael


On Thu, Jul 3, 2014 at 9:28 AM, Nick Coghlan <ncoghlan at gmail.com> wrote:

> On 3 July 2014 03:24, Reinout van Rees <reinout at vanrees.org> wrote:
> > On 30-06-14 17:56, Nick Coghlan wrote:
> >>
> >> Yeah, it's the "you still need a way to define what goes into the image"
> >> part that intrigues me with respect to combining tools like zc.buildout
> >> with Docker.
> >
> >
> > Buildout, to me, solves all there is to solve regarding python packages
> and
> > a bit of configuration. Including calling bower to go grab the necessary
> > css/js :-)
> >
> > That css/js is quite an important part of "what goes into the image".
> Bower
> with its dependency mechanism solves that (and it can be called from
> > buildout).
> >
> > A third important one is system packages: "what do I apt-get install".
> >
> > **Question/idea**: what about some mechanism to get this apt-get
> information
> > out of a python package? If a site or package absolutely requires gdal or
> > redis or memcache, it feels natural to me to have that knowledge
> somewhere
> > in the python package.
> >
> > Does anyone do something like this? I was thinking along the lines of a
> > simple 'debian_dependencies.txt' that I could use as input for
> > ansible/fabric/fpm/whatever.
> >
> > Looking for ideas :-)
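
One minimal take on the idea (the file name and one-package-per-line format are assumptions, simply following the 'debian_dependencies.txt' suggestion above):

```python
# Hypothetical parser for a debian_dependencies.txt shipped inside a
# Python package: one apt package name per line, '#' starts a comment,
# blank lines are ignored. The format itself is an assumption.
def parse_system_deps(lines):
    pkgs = []
    for line in lines:
        name = line.split("#", 1)[0].strip()
        if name:
            pkgs.append(name)
    return pkgs

def apt_install_command(lines):
    # Produce the argv that ansible/fabric/fpm/whatever could invoke.
    return ["apt-get", "install", "-y"] + parse_system_deps(lines)
```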
>
> Allowing external dependency info to be captured in the upstream
> Python packages is one of the goals behind the metadata extension
> system for metadata 2.0.
>
> The current draft of the extension system is at
> http://www.python.org/dev/peps/pep-0426/#metadata-extensions
>
> A preliminary set of "standard extensions" is at
> http://www.python.org/dev/peps/pep-0459/
>
> The idea is that the "core metadata" focuses on what is needed to
> support the essential dependency resolution process, while extensions
> represent optional extras. The "installer_must_handle" field is an
> escape clause allowing a distribution to say "if you don't handle this
> extension, you can't install this package properly, so fail rather
> than silently doing the wrong thing". That approach will allow us to
> add post-install hooks later as an extension, and have earlier
> versions of tools fall back to installing from source rather than
> silently failing to run the post-install hooks.
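
As a rough illustration of how an installer might honor that escape clause (the "example.system_dependencies" extension and its contents below are invented; only the "extensions" mapping and the "installer_must_handle" field come from the PEP drafts):

```python
# Invented metadata 2.0 example; only "extensions" and
# "installer_must_handle" are taken from the PEP 426 drafts.
metadata = {
    "metadata_version": "2.0",
    "name": "example-dist",
    "version": "1.0",
    "extensions": {
        "example.system_dependencies": {      # hypothetical extension
            "installer_must_handle": True,    # fail rather than do the wrong thing
            "debian": ["gdal-bin", "redis-server"],
        },
    },
}

def unhandled_required_extensions(meta, handled):
    # An installer would refuse to proceed (or fall back to a source
    # install) if this returns anything.
    return [name for name, ext in meta.get("extensions", {}).items()
            if ext.get("installer_must_handle") and name not in handled]
```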
>
> Cheers,
> Nick.