As many know there has been an effort to get a generalized interface that a build system can implement so that we can break the hard dependency on setuptools that Python packaging currently has. After a lot of deliberation and threads back and forth (as well as video meetings!) I think that the version of the PEP that is currently on the PR in https://github.com/pypa/interoperability-peps/pull/54 looks like it’s generally the right thing to move forward with. I made a few little comments but overall I think it’s there and we’re ready to move forward on trying to get a reference implementation done that can validate the decisions made in that PEP (and then, hopefully finalize the PEP and merge those implementations).
So many thanks to everyone involved in hammering this out thus far :) I’m nervous but excited about the possibility of making setuptools just another build system.
-----------------
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
Hi all,
Here's a first draft of a PEP for the manylinux1 platform tag
mentioned earlier, posted for feedback. Really Robert McGibbon should
get the main credit for this, since he wrote it, and also the docker
image and the amazing auditwheel tool linked below, but he asked me to
do the honors of posting it :-).
BTW, if anyone wants to try this out, there are some test
"manylinux1-compatible" wheels at
https://vorpus.org/~njs/tmp/manylinux-test-wheels/repaired
for PySide (i.e. Qt) and numpy (using openblas). They should be
installable on any ordinary linux system with:
pip install --no-index -f
https://vorpus.org/~njs/tmp/manylinux-test-wheels/repaired $PKG
(Note that this may require a reasonably up-to-date pip -- e.g. the
one in Debian is too old, which confused me for a bit.)
(How they were created: docker run -it quay.io/manylinux/manylinux
bash; install conda to get builds of Qt and OpenBLAS because I
was too lazy to do it myself; pip wheel PySide / pip wheel numpy;
auditwheel repair <the resulting wheels>, which copies in all the
dependencies to make the wheels self-contained. Just proof-of-concept
for now, but they seem to work.)
----
PEP: XXXX
Title: A Platform Tag for Portable Linux Built Distributions
Version: $Revision$
Last-Modified: $Date$
Author: Robert T. McGibbon <rmcgibbo(a)gmail.com>, Nathaniel J. Smith
<njs(a)pobox.com>
Status: Draft
Type: Process
Content-Type: text/x-rst
Created: 19-Jan-2016
Post-History: 19-Jan-2016
Abstract
========
This PEP proposes the creation of a new platform tag for Python package built
distributions, such as wheels, called ``manylinux1_{x86_64,i386}`` with
external dependencies limited to a standardized, restricted subset of
the Linux kernel and core userspace ABI. It proposes that PyPI support
uploading and distributing wheels with this platform tag, and that ``pip``
support downloading and installing these packages on compatible platforms.
Rationale
=========
Currently, distribution of binary Python extensions for Windows and OS X is
straightforward. Developers and packagers build wheels, which are assigned
platform tags such as ``win32`` or ``macosx_10_6_intel``, and upload these
wheels to PyPI. Users can download and install these wheels using tools such
as ``pip``.
For Linux, the situation is much more delicate. In general, compiled Python
extension modules built on one Linux distribution will not work on other Linux
distributions, or even on the same Linux distribution with different system
libraries installed.
Build tools using PEP 425 platform tags [1]_ do not track information about the
particular Linux distribution or installed system libraries, and instead assign
all wheels the too-vague ``linux_i386`` or ``linux_x86_64`` tags. Because of
this ambiguity, there is no expectation that ``linux``-tagged built
distributions compiled on one machine will work properly on another, and for
this reason, PyPI has not permitted the uploading of wheels for Linux.
It would be ideal if wheel packages could be compiled that would work on *any*
linux system. But, because of the incredible diversity of Linux systems -- from
PCs to Android to embedded systems with custom libcs -- this cannot
be guaranteed in general.
Instead, we define a standard subset of the kernel+core userspace ABI that,
in practice, is compatible enough that packages conforming to this standard
will work on *many* linux systems, including essentially all of the desktop
and server distributions in common use. We know this because there are
companies who have been distributing such widely-portable pre-compiled Python
extension modules for Linux -- e.g. Enthought with Canopy [2]_ and Continuum
Analytics with Anaconda [3]_.
Building on the compatibility lessons learned from these companies, we thus
define a baseline ``manylinux1`` platform tag for use by binary Python
wheels, and introduce the implementation of preliminary tools to aid in the
construction of these ``manylinux1`` wheels.
Key Causes of Inter-Linux Binary Incompatibility
================================================
To properly define a standard that will guarantee that wheel packages meeting
this specification will operate on *many* linux platforms, it is necessary to
understand the root causes which often prevent portability of pre-compiled
binaries on Linux. The two key causes are dependencies on shared libraries
which are not present on users' systems, and dependencies on particular
versions of certain core libraries like ``glibc``.
External Shared Libraries
-------------------------
Most desktop and server linux distributions come with a system package manager
(examples include ``APT`` on Debian-based systems, ``yum`` on
``RPM``-based systems, and ``pacman`` on Arch linux) that manages, among other
responsibilities, the installation of shared libraries installed to system
directories such as ``/usr/lib``. Most non-trivial Python extensions will depend
on one or more of these shared libraries, and thus function properly only on
systems where the user has the proper libraries (and the proper
versions thereof), either installed using their package manager, or installed
manually by setting certain environment variables such as ``LD_LIBRARY_PATH``
to notify the runtime linker of the location of the depended-upon shared
libraries.
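For illustration only (this is not part of the proposed policy), the external
shared libraries an extension module requires can be enumerated from its
``DT_NEEDED`` entries. The sketch below shells out to the binutils ``readelf``
tool; the exact line format parsed here is an assumption about ``readelf``'s
human-readable output: ::

    import subprocess

    def dt_needed(path):
        # Return the SONAMEs (DT_NEEDED entries) an ELF file was linked
        # against, e.g. ["libssl.so.1.0.0", "libc.so.6", ...].
        output = subprocess.check_output(
            ["readelf", "-d", path], universal_newlines=True)
        needed = []
        for line in output.splitlines():
            # Assumed format: "... (NEEDED)  Shared library: [libc.so.6]"
            if "(NEEDED)" in line and "[" in line:
                needed.append(line.split("[", 1)[1].split("]", 1)[0])
        return needed

    # Example (the path shown is hypothetical):
    # print(dt_needed("/usr/lib/python2.7/lib-dynload/_ssl.so"))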
Versioning of Core Shared Libraries
-----------------------------------
Even if the authors or maintainers of a Python extension module wish to use no
external shared libraries, the modules will generally have a dynamic runtime
dependency on the GNU C library, ``glibc``. While it is possible, statically
linking ``glibc`` is usually a bad idea because of bloat, and because certain
important C functions like ``dlopen()`` cannot be called from code that
statically links ``glibc``. A runtime shared library dependency on a
system-provided ``glibc`` is unavoidable in practice.
The maintainers of the GNU C library follow a strict symbol versioning scheme
for backward compatibility. This ensures that binaries compiled against an older
version of ``glibc`` can run on systems that have a newer ``glibc``. The
opposite is generally not true -- binaries compiled on newer Linux
distributions tend to rely upon versioned functions in glibc that are not
available on older systems.
This generally prevents built distributions compiled on the latest Linux
distributions from being portable.
The ``manylinux1`` policy
=========================
For these reasons, to achieve broad portability, Python wheels
* should depend only on an extremely limited set of external shared
libraries; and
* should depend only on "old" symbol versions in those external shared
libraries.
The ``manylinux1`` policy thus encompasses a standard for what the
permitted external shared libraries a wheel may depend on, and the maximum
depended-upon symbol versions therein.
The permitted external shared libraries are: ::
libpanelw.so.5
libncursesw.so.5
libgcc_s.so.1
libstdc++.so.6
libm.so.6
libdl.so.2
librt.so.1
libcrypt.so.1
libc.so.6
libnsl.so.1
libutil.so.1
libpthread.so.0
libX11.so.6
libXext.so.6
libXrender.so.1
libICE.so.6
libSM.so.6
libGL.so.1
libgobject-2.0.so.0
libgthread-2.0.so.0
libglib-2.0.so.0
On Debian-based systems, these libraries are provided by the packages ::
libncurses5 libgcc1 libstdc++6 libc6 libx11-6 libxext6
libxrender1 libice6 libsm6 libgl1-mesa-glx libglib2.0-0
On RPM-based systems, these libraries are provided by the packages ::
ncurses libgcc libstdc++ glibc libXext libXrender
libICE libSM mesa-libGL glib2
This list was compiled by checking the external shared library dependencies of
the Canopy [2]_ and Anaconda [3]_ distributions, which both include a wide array
of the most popular Python modules and have been confirmed in practice to work
across a wide swath of Linux systems in the wild.
For dependencies on externally-provided versioned symbols in the above shared
libraries, the following symbol versions are permitted: ::
GLIBC <= 2.5
CXXABI <= 3.4.8
GLIBCXX <= 3.4.9
GCC <= 4.2.0
These symbol versions were determined by inspecting the latest symbol version
provided in the libraries distributed with CentOS 5, a Linux distribution
released in April 2007. In practice, this means that Python wheels which conform
to this policy should function on almost any linux distribution released after
this date.
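For illustration (again, not normative text), a requirement such as
``GLIBC_2.4`` or ``GLIBCXX_3.4.11``, as reported by tools like ``objdump -T``,
can be checked against the maxima above with a simple tuple comparison; the
helper name below is ours, not an existing API: ::

    # Permitted maxima from the policy above.
    MAX_SYMBOL_VERSIONS = {
        "GLIBC": (2, 5),
        "CXXABI": (3, 4, 8),
        "GLIBCXX": (3, 4, 9),
        "GCC": (4, 2, 0),
    }

    def symbol_version_ok(versioned_symbol):
        # e.g. symbol_version_ok("GLIBC_2.4")      -> True
        #      symbol_version_ok("GLIBCXX_3.4.11") -> False (too new)
        name, _, version = versioned_symbol.partition("_")
        if name not in MAX_SYMBOL_VERSIONS:
            return False
        required = tuple(int(piece) for piece in version.split("."))
        return required <= MAX_SYMBOL_VERSIONS[name]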
Compilation and Tooling
=======================
To support the compilation of wheels meeting the ``manylinux1`` standard, we
provide initial drafts of two tools.
The first is a Docker image based on CentOS 5.11, which is recommended as an
easy to use self-contained build box for compiling ``manylinux1`` wheels [4]_.
Compiling on a more recently-released linux distribution will generally
introduce dependencies on too-new versioned symbols. The image comes with a
full compiler suite installed (``gcc``, ``g++``, and ``gfortran`` 4.8.2) as
well as the latest releases of Python and pip.
The second tool is a command line executable called ``auditwheel`` [5]_. First,
it inspects all of the ELF files inside a wheel to check for dependencies on
versioned symbols or external shared libraries, and verifies conformance with
the ``manylinux1`` policy. This includes the ability to add the new platform
tag to conforming wheels.
In addition, ``auditwheel`` has the ability to automatically modify wheels that
depend on external shared libraries by copying those shared libraries from
the system into the wheel itself, and modifying the appropriate RPATH entries
such that these libraries will be picked up at runtime. This accomplishes a
similar result as if the libraries had been statically linked without requiring
changes to the build system.
Neither of these tools is necessary to build wheels which conform with the
``manylinux1`` policy. Similar results can usually be achieved by statically
linking external dependencies and/or using certain inline assembly constructs
to instruct the linker to prefer older symbol versions; however, these tricks
can be quite esoteric.
Platform Detection for Installers
=================================
Because the ``manylinux1`` profile is already known to work for the many
thousands of users of popular commercial Python distributions, we suggest that
installation tools like ``pip`` should err on the side of assuming that a
system *is* compatible, unless there is specific reason to think otherwise.
We know of three main sources of potential incompatibility that are likely to
arise in practice:
* A linux distribution that is too old (e.g. RHEL 4)
* A linux distribution that does not use glibc (e.g. Alpine Linux, which is
based on musl libc, or Android)
* Eventually, in the future, there may exist distributions that break
compatibility with this profile
To handle the first two cases, we propose the following simple and reliable
check: ::
def have_glibc_version(major, minimum_minor):
import ctypes
process_namespace = ctypes.CDLL(None)
try:
gnu_get_libc_version = process_namespace.gnu_get_libc_version
except AttributeError:
# We are not linked to glibc.
return False
gnu_get_libc_version.restype = ctypes.c_char_p
version_str = gnu_get_libc_version()
# py2 / py3 compatibility:
if not isinstance(version_str, str):
version_str = version_str.decode("ascii")
version = [int(piece) for piece in version_str.split(".")]
assert len(version) == 2
if major != version[0]:
return False
if minimum_minor > version[1]:
return False
return True
# CentOS 5 uses glibc 2.5.
is_manylinux1_compatible = have_glibc_version(2, 5)
To handle the third case, we propose the creation of a file
``/etc/python/compatibility.cfg`` in ConfigParser format, with sample
contents: ::
[manylinux1]
compatible = true
where the supported values for the ``manylinux1.compatible`` entry are the
same as those supported by the ConfigParser ``getboolean`` method.
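A minimal sketch of reading this file using only the standard library (the
helper name and the choice to return ``None`` when the file or key is absent
are illustrative, not mandated by this PEP): ::

    def manylinux1_override(path="/etc/python/compatibility.cfg"):
        # Returns True/False if the file explicitly answers the question,
        # or None if the file or the key is missing.
        try:
            import configparser                      # Python 3
        except ImportError:
            import ConfigParser as configparser      # Python 2
        parser = configparser.ConfigParser()
        if not parser.read(path):
            return None
        try:
            return parser.getboolean("manylinux1", "compatible")
        except (configparser.NoSectionError, configparser.NoOptionError):
            return None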
The proposed logic for ``pip`` or related tools, then, is:
0) If ``distutils.util.get_platform()`` does not start with the string
``"linux"``, then assume the current system is not ``manylinux1``
compatible.
1) If ``/etc/python/compatibility.cfg`` exists and contains a ``manylinux1``
key, then trust that.
2) Otherwise, if ``have_glibc_version(2, 5)`` returns true, then assume the
current system can handle ``manylinux1`` wheels.
3) Otherwise, assume that the current system cannot handle ``manylinux1``
wheels.
Security Implications
=====================
One of the advantages of dependencies on centralized libraries in Linux is
that bugfixes and security updates can be deployed system-wide, and
applications which depend on these libraries will automatically feel the
effects of these patches when the underlying libraries are updated. This can
be particularly important for security updates in packages engaged in
communication across the network or cryptography.
``manylinux1`` wheels distributed through PyPI that bundle security-critical
libraries like OpenSSL will thus assume responsibility for prompt updates in
response to disclosed vulnerabilities and patches. This closely parallels the
security implications of the distribution of binary wheels on Windows that,
because the platform lacks a system package manager, generally bundle their
dependencies. In particular, because it lacks a stable ABI, OpenSSL cannot be
included in the ``manylinux1`` profile.
Rejected Alternatives
=====================
One alternative would be to provide separate platform tags for each Linux
distribution (and each version thereof), e.g. ``RHEL6``, ``ubuntu14_10``,
``debian_jessie``, etc. Nothing in this proposal rules out the possibility of
adding such platform tags in the future, or of further extensions to wheel
metadata that would allow wheels to declare dependencies on external
system-installed packages. However, such extensions would require substantially
more work than this proposal, and still might not be appreciated by package
developers who would prefer not to have to maintain multiple build environments
and build multiple wheels in order to cover all the common Linux distributions.
Therefore we consider such proposals to be out-of-scope for this PEP.
References
==========
.. [1] PEP 425 -- Compatibility Tags for Built Distributions
(https://www.python.org/dev/peps/pep-0425/)
.. [2] Enthought Canopy Python Distribution
(https://store.enthought.com/downloads/)
.. [3] Continuum Analytics Anaconda Python Distribution
(https://www.continuum.io/downloads)
.. [4] manylinux1 docker image
(https://quay.io/repository/manylinux/manylinux)
.. [5] auditwheel
(https://pypi.python.org/pypi/auditwheel)
Copyright
=========
This document has been placed into the public domain.
..
Local Variables:
mode: indented-text
indent-tabs-mode: nil
sentence-end-double-space: t
fill-column: 70
coding: utf-8
End:
--
Nathaniel J. Smith -- https://vorpus.org
Here's an interesting bug I just discovered on the old and poorly
maintained VM that I keep for running random junk:
# 64-bit kernel
$ uname -a
Linux roberts 4.1.5-x86_64-linode61 #7 SMP Mon Aug 24 13:46:31 EDT
2015 x86_64 x86_64 x86_64 GNU/Linux
# 32-bit userland
$ file /usr/bin/python2.7
/usr/bin/python2.7: ELF 32-bit LSB executable, Intel 80386, version 1
(SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.24,
BuildID[sha1]=0xf481c2b1f8b4328b2f56b642022cc35b4ad91b61, stripped
$ /usr/bin/python2.7
# Yep, definitely 32-bit
>>> import sys; sys.maxint
2147483647
# Uh oh.
>>> import distutils.util; distutils.util.get_platform()
'linux-x86_64'
# Yeah, this is bad
>>> import distlib.wheel; distlib.wheel.COMPATIBLE_TAGS
set([(u'cp21', u'none', u'any'), (u'py21', u'none', u'any'), (u'py23',
u'none', u'any'), (u'cp24', u'none', u'any'), (u'py2', u'none',
u'any'), (u'cp27', u'none', u'any'), (u'cp20', u'none', u'any'),
(u'py22', u'none', u'any'), (u'cp26', u'none', u'any'), (u'cp27',
u'cp27mu', u'linux_x86_64'), (u'py25', u'none', u'any'), (u'cp23',
u'none', u'any'), (u'py24', u'none', u'any'), (u'cp25', u'none',
u'any'), (u'py27', u'none', u'any'), (u'cp22', u'none', u'any'),
(u'cp2', u'none', u'any'), (u'cp27', u'none', u'linux_x86_64'),
(u'py26', u'none', u'any'), (u'py20', u'none', u'any')])
In the past this has never mattered, because there were no linux
wheels on pypi, so even if pip was using the wrong platform tag it
still wouldn't download the wrong thing. But once manylinux1 goes
live, any systems configured like this will start downloading and
installing totally broken wheels that will crash on import.
The problem seems to be that distutils.util.get_platform() assumes
that the architecture is whatever uname's "machine" field says
(equivalent to uname -m). Unfortunately, this gives the architecture
of the kernel, not of the Python interpreter.
I think the fix is that we should add some check like
if osname == "linux" and machine == "x86_64" and sys.maxsize == 2147483647:
machine = "i686"
to distutils (in the branches where it's still maintained), and also
need to add a similar workaround to distlib? (With the latter needing
to land before or together with manylinux1 enablement.)
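For reference, here's a self-contained version of that check (the helper name
is just for illustration; it isn't an existing distutils or distlib API):
import sys
import platform

def effective_machine():
    # platform.machine() reports the *kernel* architecture (uname -m),
    # so a 32-bit interpreter on a 64-bit kernel needs the same
    # correction proposed above for distutils.
    machine = platform.machine()
    if (sys.platform.startswith("linux") and machine == "x86_64"
            and sys.maxsize == 2147483647):
        machine = "i686"
    return machine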
-n
--
Nathaniel J. Smith -- https://vorpus.org
I've just released version 0.2.2 of distlib on PyPI [1]. For newcomers,
distlib is a library of packaging functionality which is intended to be
usable as the basis for third-party packaging tools.
The main changes in this release are as follows:
* Fixed issue #81: Added support for detecting distributions installed by
wheel versions >= 0.23 (which use metadata.json rather than pydist.json).
* Updated default PyPI URL to https://pypi.python.org/pypi
* Updated to use different formatting for description field for V1.1
metadata.
* Corrected “classifier” to “classifiers” in the mapping for V1.0 metadata.
* Improved support for Jython when quoting executables in output scripts.
* Fixed issue #77: Made the internal URL used for extended metadata fetches
configurable via a module attribute.
* Fixed issue #78: Improved entry point parsing to handle leading spaces in
ini-format files.
A more detailed change log is available at [2].
Please try it out, and if you find any problems or have any suggestions for
improvements, please give some feedback using the issue tracker! [3]
Regards,
Vinay Sajip
[1] https://pypi.python.org/pypi/distlib/0.2.2
[2] https://goo.gl/M3kQzR
[3] https://bitbucket.org/pypa/distlib/issues/new
Bah, offlist by mistake.
---------- Forwarded message ----------
From: Robert Collins <robertc(a)robertcollins.net>
Date: 30 January 2016 at 09:25
Subject: Re: [Distutils] How to get pip to really, really, I mean it
-- rebuild this damn package!
To: Chris Barker - NOAA Federal <chris.barker(a)noaa.gov>
Please try pip 7.1? Latest before 8; we're not meant to be caching
wheels of by-location things from my memory, but it may have
regressed/changed with the cache changes made during the 8 development
cycle.
-Rob
On 30 January 2016 at 04:48, Chris Barker - NOAA Federal
<chris.barker(a)noaa.gov> wrote:
>>> Requirement already satisfied (use --upgrade to upgrade): gsw==3.0.3 from
>>> file:///Users/chris.barker/miniconda2/conda-bld/work/gsw-3.0.3 in
>>> /Users/chris.barker/miniconda2/conda-bld/work/gsw-3.0.3
>>
>> I think this is saying that pip thinks it has found an
>> already-installed version of gsw 3.0.3 in sys.path, and that the
>> directory in your sys.path where it's already installed is
>>
>> /Users/chris.barker/miniconda2/conda-bld/work/gsw-3.0.3
>
> That is the temp dir conda sets up to unpack downloaded files, and do
> its work in -- hence the name. I'll look and see what's there. I'm
> pretty sure conda build starts out with an empty dir, however. And
> that dir should not be on sys.path.
>
>> I think this means that that directory is (a) in sys.path, and (b)
>> contains a .egg-info/.dist-info directory for gsw 3.0.3. Part (a)
>> seems weird and broken.
>
> Indeed. And I get the same symptoms with a clean environment that I've
> set up outside conda build. Though with the same source dir. But with
> conda build, it's a fresh unpack of the tarball.
>
>> Do you have "." in your PYTHONPATH or anything like that?
>
> God no!
>
>> Don't know why it seems to be building a wheel for it, if it already
>> thinks that it's installed... this is also odd.
>
> Yes it is. But it doesn't install it :-(
>
>>
>> $PYTHON -m pip install --no-cache-dir --upgrade --force-reinstall ./
>>
>> ? Though I'd think that -I would have the same effect as --force-reinstall...
>>
> So did I, and I think I tried --force-reinstall already, but I will again.
>
>> (It doesn't look like the cache dir is your problem here, but you do
>> probably want to use --no-cache-dir anyway just as good practice, just
>> because you don't want to accidentally package up a stale version of
>> the software that got pulled out of your cache instead of the version
>> you thought you were packaging in the tree in front of you.
>
> Exactly. Doesn't seem to make a difference, though.
>
>> Also, I think it's a bug in pip that it caches builds of source trees
>> -- PyPI can enforce the rule that each (package name, version number)
>> sdist is unique, but for a work-in-progress VCS checkout it's just not
>> true that (package name, version number) uniquely identifies a
>> snapshot of the whole tree. So in something like 'pip install .', then
>> requirement resolution code should treat this as a special requirement
>> that it wants *this exact tree*, not just any package that has the
>> same (package name, version number) as this tree; and the resulting
>> wheel should not be cached.
>
> Absolutely! In fact, I'll bet that approach is the source of the
> problem here. If not automagically, there should be a flag, at least.
>
> However, what seems to be happening is that pip is looking outside the
> current Python environment somewhere to see if this package needs to
> be installed. It may be something that works with virtualenv, but
> doesn't with conda environments for some reason.
>
> I guess on some level pip simply isn't designed to build and install
> from local source :-(
>
> In the end, I'm still confused: does pip install give me anything that:
>
> setup.py install --single-version-externally-managed
>
> Doesn't? Other than support for non-setuptools installs, anyway.
>
> CHB
>
>
>> I don't know if there are any bugs filed
>> in pip on this...)
>>
>> -n
>>
>> --
>> Nathaniel J. Smith -- https://vorpus.org
> _______________________________________________
> Distutils-SIG maillist - Distutils-SIG(a)python.org
> https://mail.python.org/mailman/listinfo/distutils-sig
--
Robert Collins <rbtcollins(a)hpe.com>
Distinguished Technologist
HP Converged Cloud
Hi,
This PEP is an updated version of the draft manylinux1 PEP posted to this
list a couple days
ago by Nathaniel and myself. The changes reflect the discussion on the list
(thanks to everyone
for all of the feedback), and generally go to the clarity and precision of
the text.
HTML version: https://github.com/manylinux/manylinux/blob/master/pep-513.rst
-Robert
----
PEP: 513
Title: A Platform Tag for Portable Linux Built Distributions
Version: $Revision$
Last-Modified: $Date$
Author: Robert T. McGibbon <rmcgibbo(a)gmail.com>, Nathaniel J. Smith
<njs(a)pobox.com>
BDFL-Delegate: Nick Coghlan <ncoghlan(a)gmail.com>
Discussions-To: Distutils SIG <distutils-sig(a)python.org>
Status: Draft
Type: Informational
Content-Type: text/x-rst
Created: 19-Jan-2016
Post-History: 19-Jan-2016, 25-Jan-2016
Abstract
========
This PEP proposes the creation of a new platform tag for Python package built
distributions, such as wheels, called ``manylinux1_{x86_64,i386}`` with
external dependencies limited to a standardized, restricted subset of
the Linux kernel and core userspace ABI. It proposes that PyPI support
uploading and distributing wheels with this platform tag, and that ``pip``
support downloading and installing these packages on compatible platforms.
Rationale
=========
Currently, distribution of binary Python extensions for Windows and OS X is
straightforward. Developers and packagers build wheels [1]_ [2]_, which are
assigned platform tags such as ``win32`` or ``macosx_10_6_intel``, and upload
these wheels to PyPI. Users can download and install these wheels using tools
such as ``pip``.
For Linux, the situation is much more delicate. In general, compiled Python
extension modules built on one Linux distribution will not work on other Linux
distributions, or even on different machines running the same Linux
distribution with different system libraries installed.
Build tools using PEP 425 platform tags [3]_ do not track information about the
particular Linux distribution or installed system libraries, and instead assign
all wheels the too-vague ``linux_i386`` or ``linux_x86_64`` tags. Because of
this ambiguity, there is no expectation that ``linux``-tagged built
distributions compiled on one machine will work properly on another, and for
this reason, PyPI has not permitted the uploading of wheels for Linux.
It would be ideal if wheel packages could be compiled that would work on *any*
linux system. But, because of the incredible diversity of Linux systems -- from
PCs to Android to embedded systems with custom libcs -- this cannot
be guaranteed in general.
Instead, we define a standard subset of the kernel+core userspace ABI that,
in practice, is compatible enough that packages conforming to this standard
will work on *many* linux systems, including essentially all of the desktop
and server distributions in common use. We know this because there are
companies who have been distributing such widely-portable pre-compiled Python
extension modules for Linux -- e.g. Enthought with Canopy [4]_ and Continuum
Analytics with Anaconda [5]_.
Building on the compatibility lessons learned from these companies, we thus
define a baseline ``manylinux1`` platform tag for use by binary Python
wheels, and introduce the implementation of preliminary tools to aid in the
construction of these ``manylinux1`` wheels.
Key Causes of Inter-Linux Binary Incompatibility
================================================
To properly define a standard that will guarantee that wheel packages meeting
this specification will operate on *many* linux platforms, it is necessary to
understand the root causes which often prevent portability of pre-compiled
binaries on Linux. The two key causes are dependencies on shared libraries
which are not present on users' systems, and dependencies on particular
versions of certain core libraries like ``glibc``.
External Shared Libraries
-------------------------
Most desktop and server linux distributions come with a system package manager
(examples include ``APT`` on Debian-based systems, ``yum`` on
``RPM``-based systems, and ``pacman`` on Arch linux) that manages, among other
responsibilities, the installation of shared libraries installed to system
directories such as ``/usr/lib``. Most non-trivial Python extensions will depend
on one or more of these shared libraries, and thus function properly only on
systems where the user has the proper libraries (and the proper
versions thereof), either installed using their package manager, or installed
manually by setting certain environment variables such as ``LD_LIBRARY_PATH``
to notify the runtime linker of the location of the depended-upon shared
libraries.
Versioning of Core Shared Libraries
-----------------------------------
Even if the developers of a Python extension module wish to use no
external shared libraries, the modules will generally have a dynamic runtime
dependency on the GNU C library, ``glibc``. While it is possible, statically
linking ``glibc`` is usually a bad idea because certain important C functions
like ``dlopen()`` cannot be called from code that statically links ``glibc``. A
runtime shared library dependency on a system-provided ``glibc`` is unavoidable
in practice.
The maintainers of the GNU C library follow a strict symbol versioning scheme
for backward compatibility. This ensures that binaries compiled against an older
version of ``glibc`` can run on systems that have a newer ``glibc``. The
opposite is generally not true -- binaries compiled on newer Linux
distributions tend to rely upon versioned functions in ``glibc`` that are not
available on older systems.
This generally prevents wheels compiled on the latest Linux distributions
from being portable.
The ``manylinux1`` policy
=========================
For these reasons, to achieve broad portability, Python wheels
* should depend only on an extremely limited set of external shared
libraries; and
* should depend only on "old" symbol versions in those external shared
libraries; and
* should depend only on a widely-compatible kernel ABI.
To be eligible for the ``manylinux1`` platform tag, a Python wheel must
therefore both (a) contain binary executables and compiled code that links
*only* to libraries (other than the appropriate ``libpython`` library, which is
always a permitted dependency consistent with the PEP 425 ABI tag) with SONAMEs
included in the following list: ::
libpanelw.so.5
libncursesw.so.5
libgcc_s.so.1
libstdc++.so.6
libm.so.6
libdl.so.2
librt.so.1
libcrypt.so.1
libc.so.6
libnsl.so.1
libutil.so.1
libpthread.so.0
libX11.so.6
libXext.so.6
libXrender.so.1
libICE.so.6
libSM.so.6
libGL.so.1
libgobject-2.0.so.0
libgthread-2.0.so.0
libglib-2.0.so.0
and (b), work on a stock CentOS 5.11 [6]_ system that contains the system
package manager's provided versions of these libraries.
Because CentOS 5 is only available for x86_64 and i386 architectures,
these are the only architectures currently supported by the ``manylinux1``
policy.
On Debian-based systems, these libraries are provided by the packages ::
libncurses5 libgcc1 libstdc++6 libc6 libx11-6 libxext6
libxrender1 libice6 libsm6 libgl1-mesa-glx libglib2.0-0
On RPM-based systems, these libraries are provided by the packages ::
ncurses libgcc libstdc++ glibc libXext libXrender
libICE libSM mesa-libGL glib2
This list was compiled by checking the external shared library dependencies of
the Canopy [4]_ and Anaconda [5]_ distributions, which both include a wide array
of the most popular Python modules and have been confirmed in practice to work
across a wide swath of Linux systems in the wild.
Many of the permitted system libraries listed above use symbol versioning
schemes for backward compatibility. The latest symbol versions provided with
the CentOS 5.11 versions of these libraries are: ::
GLIBC_2.5
CXXABI_3.4.8
GLIBCXX_3.4.9
GCC_4.2.0
Therefore, as a consequence of requirement (b), any wheel that depends on
versioned symbols from the above shared libraries may depend only on symbols
with the following versions: ::
GLIBC <= 2.5
CXXABI <= 3.4.8
GLIBCXX <= 3.4.9
GCC <= 4.2.0
These recommendations are the outcome of the relevant discussions in January
2016 [7]_, [8]_.
Note that in our recommendations below, we do not suggest that ``pip``
or PyPI should attempt to check for and enforce the details of this
policy (just as they don't check for and enforce the details of
existing platform tags like ``win32``). The text above is provided (a)
as advice to package builders, and (b) as a method for allocating
blame if a given wheel doesn't work on some system: if it satisfies
the policy above, then this is a bug in the spec or the installation
tool; if it does not satisfy the policy above, then it's a bug in the
wheel. One useful consequence of this approach is that it leaves open
the possibility of further updates and tweaks as we gain more
experience, e.g., we could have a "manylinux 1.1" policy which targets
the same systems and uses the same ``manylinux1`` platform tag (and
thus requires no further changes to ``pip`` or PyPI), but that adjusts
the list above to remove libraries that have turned out to be
problematic or add libraries that have turned out to be safe.
Compilation of Compliant Wheels
===============================
The way glibc, libgcc, and libstdc++ manage their symbol versioning
means that in practice, the compiler toolchains that most developers
use to do their daily work are incapable of building
``manylinux1``-compliant wheels. Therefore we do not attempt to change
the default behavior of ``pip wheel`` / ``bdist_wheel``: they will
continue to generate regular ``linux_*`` platform tags, and developers
who wish to use them to generate ``manylinux1``-tagged wheels will
have to change the tag as a second post-processing step.
To support the compilation of wheels meeting the ``manylinux1`` standard, we
provide initial drafts of two tools.
Docker Image
------------
The first tool is a Docker image based on CentOS 5.11, which is recommended as
an easy to use self-contained build box for compiling ``manylinux1`` wheels
[9]_. Compiling on a more recently-released linux distribution will generally
introduce dependencies on too-new versioned symbols. The image comes with a
full compiler suite installed (``gcc``, ``g++``, and ``gfortran`` 4.8.2) as
well as the latest releases of Python and ``pip``.
Auditwheel
----------
The second tool is a command line executable called ``auditwheel`` [10]_ that
may aid package maintainers in dealing with third-party external
dependencies.
dependencies.
There are at least three methods for building wheels that use third-party
external libraries in a way that meets the above policy.
1. The third-party libraries can be statically linked.
2. The third-party shared libraries can be distributed in
separate packages on PyPI which are depended upon by the wheel.
3. The third-party shared libraries can be bundled inside the wheel
   itself, linked with a relative path.
All of these are valid options which may be effectively used by different
packages and communities. Statically linking generally requires
package-specific modifications to the build system, and distributing
third-party dependencies on PyPI may require some coordination of the
community of users of the package.
As an often-automatic alternative to these options, we introduce ``auditwheel``.
The tool inspects all of the ELF files inside a wheel to check for
dependencies on versioned symbols or external shared libraries, and verifies
conformance with the ``manylinux1`` policy. This includes the ability to add
the new platform tag to conforming wheels. More importantly, ``auditwheel`` has
the ability to automatically modify wheels that depend on external shared
libraries by copying those shared libraries from the system into the wheel
itself, and modifying the appropriate ``RPATH`` entries such that these
libraries will be picked up at runtime. This accomplishes a similar result as
if the libraries had been statically linked without requiring changes to the
build system. Packagers are advised that bundling, like static linking, may
implicate copyright concerns.
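To make the bundling approach more concrete, here is a rough sketch of the
idea (this is *not* ``auditwheel``'s actual implementation; it assumes the
external ``patchelf`` tool is installed, and the helper name and ``.libs``
layout are illustrative only): ::

    import os
    import shutil
    import subprocess

    def bundle_shared_library(extension_path, library_path, subdir=".libs"):
        # Copy the external library next to the extension module and point
        # the extension's RPATH at it with an $ORIGIN-relative entry, so
        # the dynamic linker finds the bundled copy at runtime.
        dest_dir = os.path.join(os.path.dirname(extension_path), subdir)
        if not os.path.isdir(dest_dir):
            os.makedirs(dest_dir)
        shutil.copy2(library_path, dest_dir)
        subprocess.check_call(
            ["patchelf", "--set-rpath", "$ORIGIN/" + subdir, extension_path])

    # e.g. bundle_shared_library("mypkg/_ext.so", "/usr/lib64/libfoo.so.1")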
Bundled Wheels on Linux
=======================
While we acknowledge many approaches for dealing with third-party library
dependencies within ``manylinux1`` wheels, we recognize that the ``manylinux1``
policy encourages bundling external dependencies, a practice
which runs counter to the package management policies of many linux
distributions' system package managers [11]_, [12]_. The primary purpose of
this is cross-distro compatibility. Furthermore, ``manylinux1`` wheels on PyPI
occupy a different niche than the Python packages available through the
system package manager.
The decision in this PEP to encourage departure from general Linux distribution
unbundling policies is informed by the following concerns:
1. In these days of automated continuous integration and deployment
pipelines, publishing new versions and updating dependencies is easier
than it was when those policies were defined.
2. ``pip`` users remain free to use the ``"--no-binary"`` option if they want
to force local builds rather than using pre-built wheel files.
3. Modern container-based deployment and "immutable infrastructure" models
   are increasingly popular and involve substantial bundling at the
   application layer anyway.
4. Distribution of bundled wheels through PyPI is currently the norm for
Windows and OS X.
5. This PEP doesn't rule out the idea of offering more targeted binaries for
particular Linux distributions in the future.
The model described in this PEP is most ideally suited for cross-platform
Python packages, because it means they can reuse much of the
work that they're already doing to make static Windows and OS X wheels. We
recognize that it is less optimal for Linux-specific packages that might
prefer to interact more closely with Linux's unique package management
functionality and only care about targeting a small set of particular distros.
Security Implications
---------------------
One of the advantages of dependencies on centralized libraries in Linux is
that bugfixes and security updates can be deployed system-wide, and
applications which depend on these libraries will automatically feel the
effects of these patches when the underlying libraries are updated. This can
be particularly important for security updates in packages engaged in
communication across the network or cryptography.
``manylinux1`` wheels distributed through PyPI that bundle security-critical
libraries like OpenSSL will thus assume responsibility for prompt updates in
response to disclosed vulnerabilities and patches. This closely parallels the
security implications of the distribution of binary wheels on Windows that,
because the platform lacks a system package manager, generally bundle their
dependencies. In particular, because it lacks a stable ABI, OpenSSL cannot be
included in the ``manylinux1`` profile.
Platform Detection for Installers
=================================
Above, we defined what it means for a *wheel* to be
``manylinux1``-compatible. Here we discuss what it means for a *Python
installation* to be ``manylinux1``-compatible. In particular, this is
important for tools like ``pip`` to know when deciding whether or not
they should consider ``manylinux1``-tagged wheels for installation.
Because the ``manylinux1`` profile is already known to work for the
many thousands of users of popular commercial Python distributions, we
suggest that installation tools should err on the side of assuming
that a system *is* compatible, unless there is specific reason to
think otherwise.
We know of three main sources of potential incompatibility that are likely to
arise in practice:
* A linux distribution that is too old (e.g. RHEL 4)
* A linux distribution that does not use ``glibc`` (e.g. Alpine Linux, which is
based on musl ``libc``, or Android)
* Eventually, in the future, there may exist distributions that break
compatibility with this profile
To handle the first two cases, we propose the following simple and reliable
check: ::
def have_glibc_version(major, minimum_minor):
import ctypes
process_namespace = ctypes.CDLL(None)
try:
gnu_get_libc_version = process_namespace.gnu_get_libc_version
except AttributeError:
# We are not linked to glibc.
return False
gnu_get_libc_version.restype = ctypes.c_char_p
version_str = gnu_get_libc_version()
# py2 / py3 compatibility:
if not isinstance(version_str, str):
version_str = version_str.decode("ascii")
version = [int(piece) for piece in version_str.split(".")]
assert len(version) == 2
if major != version[0]:
return False
if minimum_minor > version[1]:
return False
return True
# CentOS 5 uses glibc 2.5.
is_manylinux1_compatible = have_glibc_version(2, 5)
To handle the third case, we propose the creation of a file
``/etc/python/compatibility.cfg`` in ConfigParser format, with sample
contents: ::
[manylinux1]
compatible = true
where the supported values for the ``manylinux1.compatible`` entry are the
same as those supported by the ConfigParser ``getboolean`` method.
The proposed logic for ``pip`` or related tools, then, is:
0) If ``distutils.util.get_platform()`` does not start with the string
``"linux"``, then assume the current system is not ``manylinux1``
compatible.
1) If ``/etc/python/compatibility.cfg`` exists and contains a ``manylinux1``
key, then trust that.
2) Otherwise, if ``have_glibc_version(2, 5)`` returns true, then assume the
current system can handle ``manylinux1`` wheels.
3) Otherwise, assume that the current system cannot handle ``manylinux1``
wheels.
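Putting those steps together, a rough sketch of this decision procedure
(reusing ``have_glibc_version`` from above; the config-file handling shown
here is illustrative rather than mandated by this PEP): ::

    def is_manylinux1_compatible():
        import distutils.util
        # 0) Only Linux systems can be manylinux1-compatible.
        if not distutils.util.get_platform().startswith("linux"):
            return False
        # 1) An explicit answer in the compatibility file wins.
        try:
            import configparser                      # Python 3
        except ImportError:
            import ConfigParser as configparser      # Python 2
        parser = configparser.ConfigParser()
        if parser.read("/etc/python/compatibility.cfg"):
            try:
                return parser.getboolean("manylinux1", "compatible")
            except (configparser.NoSectionError,
                    configparser.NoOptionError):
                pass
        # 2) and 3) Otherwise fall back to the glibc check defined above.
        return have_glibc_version(2, 5)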
PyPI Support
============
PyPI should permit wheels containing the ``manylinux1`` platform tag to be
uploaded. PyPI should not attempt to formally verify that wheels containing
the ``manylinux1`` platform tag adhere to the ``manylinux1`` policy described
in this document. This verification task should be left to other tools, like
``auditwheel``, that are developed separately.
Rejected Alternatives
=====================
One alternative would be to provide separate platform tags for each Linux
distribution (and each version thereof), e.g. ``RHEL6``, ``ubuntu14_10``,
``debian_jessie``, etc. Nothing in this proposal rules out the possibility of
adding such platform tags in the future, or of further extensions to wheel
metadata that would allow wheels to declare dependencies on external
system-installed packages. However, such extensions would require substantially
more work than this proposal, and still might not be appreciated by package
developers who would prefer not to have to maintain multiple build environments
and build multiple wheels in order to cover all the common Linux distributions.
Therefore we consider such proposals to be out-of-scope for this PEP.
Future updates
==============
We anticipate that at some point in the future there will be a
``manylinux2`` specifying a more modern baseline environment (perhaps
based on CentOS 6), and someday a ``manylinux3`` and so forth, but we
defer specifying these until we have more experience with the initial
``manylinux1`` proposal.
References
==========
.. [1] PEP 0427 -- The Wheel Binary Package Format 1.0
(https://www.python.org/dev/peps/pep-0427/)
.. [2] PEP 0491 -- The Wheel Binary Package Format 1.9
(https://www.python.org/dev/peps/pep-0491/)
.. [3] PEP 425 -- Compatibility Tags for Built Distributions
(https://www.python.org/dev/peps/pep-0425/)
.. [4] Enthought Canopy Python Distribution
(https://store.enthought.com/downloads/)
.. [5] Continuum Analytics Anaconda Python Distribution
(https://www.continuum.io/downloads)
.. [6] CentOS 5.11 Release Notes
(https://wiki.centos.org/Manuals/ReleaseNotes/CentOS5.11)
.. [7] manylinux-discuss mailing list discussion
(https://groups.google.com/forum/#!topic/manylinux-discuss/-4l3rrjfr9U)
.. [8] distutils-sig discussion
(https://mail.python.org/pipermail/distutils-sig/2016-January/027997.html)
.. [9] manylinux1 docker image
(https://quay.io/repository/manylinux/manylinux)
.. [10] auditwheel tool
(https://pypi.python.org/pypi/auditwheel)
.. [11] Fedora Bundled Software Policy
(https://fedoraproject.org/wiki/Bundled_Software_policy)
.. [12] Debian Policy Manual -- 4.13: Convenience copies of code
(https://www.debian.org/doc/debian-policy/ch-source.html#s-embeddedfiles)
Copyright
=========
This document has been placed into the public domain.
..
Local Variables:
mode: indented-text
indent-tabs-mode: nil
sentence-end-double-space: t
fill-column: 70
coding: utf-8
End:
Context:
I'm maintaining a number of conda packages of various packages, some of
which are mine, some others, some pure python, some extensions, etc.
The way conda build works is you specify some meta data, and a build
script(s), and conda:
sets up an isolated environment in which to build.
installs the build dependencies
runs the build script
sees what got installed, and makes a package of it.
(there are complications, but that's the idea)
so what to do in the build script for a python package? the simple answer is:
$PYTHON setup.py install
But then you get those god-awful eggs, or if it's not a setuptools built
package, you don't get the right meta data for pip, etc. to resolve
dependencies.
[NOTE: I do want all the pip compatible meta data, otherwise, you have pip
trying to re-install stuff, etc. if someone does install something with pip,
or pip in editable mode, or...]
so some of us have started doing:
$PYTHON setup.py install --single-version-externally-managed --record
record.txt
Which mostly seems to work -- though that is a God-awful command line to
remember....
And it fails if the package has a plain old distutils-based setup.py
so I started going with:
$PYTHON -m pip install ./
and that seemed to work for a while for me. However, I've been having
problems lately with pip not building and re-installing the package. This is
really weird, as the conda build environment is a clean environment, there
really isn't a package already installed.
here is the log:
+ /Users/chris.barker/miniconda2/envs/_build/bin/python -m pip install -v ./
Processing /Users/chris.barker/miniconda2/conda-bld/work/gsw-3.0.3
Running setup.py (path:/tmp/pip-umxsOD-build/setup.py) egg_info for
package from file:///Users/chris.barker/miniconda2/conda-bld/work/gsw-3.0.3
Running command python setup.py egg_info
Source in /tmp/pip-umxsOD-build has version 3.0.3, which satisfies
requirement gsw==3.0.3 from
file:///Users/chris.barker/miniconda2/conda-bld/work/gsw-3.0.3
Requirement already satisfied (use --upgrade to upgrade): gsw==3.0.3 from
file:///Users/chris.barker/miniconda2/conda-bld/work/gsw-3.0.3 in
/Users/chris.barker/miniconda2/conda-bld/work/gsw-3.0.3
Requirement already satisfied (use --upgrade to upgrade): numpy in
/Users/chris.barker/miniconda2/envs/_build/lib/python2.7/site-packages
(from gsw==3.0.3)
Requirement already satisfied (use --upgrade to upgrade): nose in
/Users/chris.barker/miniconda2/envs/_build/lib/python2.7/site-packages
(from gsw==3.0.3)
Building wheels for collected packages: gsw
Running setup.py bdist_wheel for gsw ... Destination directory:
/tmp/tmprPhOYkpip-wheel-
Running command /Users/chris.barker/miniconda2/envs/_build/bin/python -u
-c "import setuptools,
tokenize;__file__='/tmp/pip-umxsOD-build/setup.py';exec(compile(getattr(tokenize,
'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))"
bdist_wheel -d /tmp/tmprPhOYkpip-wheel- --python-tag cp27
done
Stored in directory:
/Users/chris.barker/Library/Caches/pip/wheels/51/4e/d7/b4cfa75866df9da00f4e4f8a9c5c35cfacfa9e92c4885ec5c4
Removing source in /tmp/pip-umxsOD-build
Successfully built gsw
Cleaning up...
You are using pip version 8.0.1, however version 8.0.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
So it seems to think it's already installed -- huh? what? In any case, it
doesn't install anything. It looks like it's referencing some cache, or
manifest or something outside of the python environment itself. So if I
installed it in a different Anaconda environment, it gets confused here.
(BTW, I can replicate this behavior outside of conda build by creating a
new conda environment by hand, and trying to use pip to build a package
locally)
So I tried various command-line options:
$PYTHON -m pip install -I -v --upgrade --no-deps ./
but no dice.
I also tried --no-cache-dir -- no change.
So how can I tell pip that I really do want it to build and install this
darn package from source, damn it!
Other option -- go back to:
$PYTHON setup.py install --single-version-externally-managed --record
record.txt
And have to fight with pip only for the non-setuptools packages. Does the
--single-version-externally-managed command do anything different than pip?
Thanks,
-Chris
--
Christopher Barker, Ph.D.
Oceanographer
Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception
Chris.Barker(a)noaa.gov
This trinity release of devpi, the private packaging and workflow
system, is drop-in compatible with earlier releases and comes with these
improvements:
- support for pip search on the server side which is also configured
when "devpi use" writes to pip configuration files.
- explicit --offline-mode for devpi-server to avoid trying
unnecessary and potentially laggy network requests and to
streamline simple pages to only contain releases that are locally
cached. thanks Daniel Panteleit for the PR.
- push from root/pypi to other indexes works now.
Docs are to be found as usual at:
http://doc.devpi.net
This release brought to you mainly by Florian Schulze and me
and a few still unnamed sponsoring companies. Speaking of which,
if you need support, training, adjustments wrt packaging and
professional testing you may contact us through http://merlinux.eu.
You can also expect devpi-server-3.0 soon, a major new release which
is to bring improvements like generalized mirroring, storage
backends, speed and internal code cleanups.
best,
holger
devpi-server-2.6.0 (2016-1-29)
------------------------------
- fix issue262: new experimental option --offline-mode will prevent
devpi-server from even trying to perform network requests and it
also strips all non-local release files from the simple index.
Thanks Daniel Panteleit for the PR.
- fix issue304: mark devpi-server versions older than 2.2.x as incompatible
and requiring an import/export cycle.
- fix issue296: try to fetch files from master again when requested, if there
were checksum errors during replication.
- if a user can't be found during authentication (with ``setup.py upload`` for
example), then the http return code is now 401 instead of 404.
- fix issue293: push from root/pypi to another index is now supported
- fix issue265: ignore HTTP(S) proxies when checking if the server is
already running.
- Add ``content_type`` route predicate for use by plugins.
devpi-web-2.6.0 (2016-1-29)
---------------------------
- fix issue305: read documentation html files in binary and let BeautifulSoup
detect the encoding.
- require devpi-server >= 2.6.0
- support for ``pip search`` command on indexes
devpi-client-2.4.0 (2016-1-29)
------------------------------
- fix issue291: transfer file modes with vcs exports. Thanks Sergey
Vasilyev for the report.
- new option "--index" for "install", "list", "push", "remove", "upload" and
"test" which allows to use a different than the current index without using
"devpi use" before
- set ``index`` in ``[search]`` section of ``pip.cfg`` when writing cfgs, to
support ``pip search``
I've posted about this idea to the list before, but this time I've
finally started working on it and have a concrete plan to discuss :)
The basic idea:
* I want to progressively move the active interoperability
specifications out of PEPs and into a subsection of
packaging.python.org
* packaging PEPs would then become a tool for changing those
specifications, rather than the specifications themselves
* the description of this process would be captured along with the
rest of the PyPA operational docs at pypa.io
* there's a draft PR to start down this path at
https://github.com/pypa/pypa.io/pull/12
That PR provides an example of the potential benefits of this approach
- it's able to state which parts of PEP 345 have been superseded by
other PEPs, and also note the "Provides-Extra" field which exists as a
de facto standard, despite not being formally adopted through the PEP
process.
However, having written the draft PR entirely against pypa.io, I've
now gone back to thinking packaging.python.org would be a better fit
for the actual hosting of the agreed specifications - the "python.org"
domain is a vastly better known one than "pypa.io", and being on a
subdomain of python.org more clearly establishes these
interoperability specifications as the Python Packaging counterparts
to the Python Language Reference and Python Library Reference.
So my next iteration will be split into two PRs: one for pypa.io
defining the specification management process, and one for
packaging.python.org adding a specifications section.
Once those changes are merged, we'll end up with additional lower
overhead ways to handle minor updates to the specifications: pure
clarifications can be handled through PRs and issues against
packaging.python.org, minor updates and/or updating the specifications
to match real world practices can be handled through a distutils-sig
discussion, while larger more complex (or more controversial) updates
will still need to go through the PEP process.
The additional background:
For folks that haven't used the PEP process to work on CPython itself,
here's the way that works:
- major or controversial proposals get a standards track PEP
- that gets debated/discussed on python-dev (perhaps with a
preliminary discussion on python-ideas or one of the SIGs)
- if Accepted, the relevant changes get made to CPython, including the
language and library reference
- the PEP is marked Final
- at this point, CPython and its docs are the source of authoritative
info, NOT the PEP
- future minor updates are handled as tracker issues, with the full
PEP process only invoked again for major or controversial changes
The key point there is that once the PEP is marked Final it becomes a
*historical document*, so there's no need to have a
meta-change-management process for the PEP itself.
It started out that distutils used PEPs at least in something
resembling the same way, since the standard library's distutils was
the reference implementation, and packaging standards evolved at the
same pace of the rest of the standard library. We broke that model
when we moved to using the independently developed pip and setuptools
as the reference implementations.
We've since been using the PEP process in a way a bit more like the
way IETF RFCs work, and I think we can all agree that's been a pretty
clumsy and horrible way to run things - the PEP process really wasn't
designed to be used that way, and it shows.
The approach I'm proposing we switch to gets us back to something much
closer to the way CPython uses the PEP process, which should help both
folks trying to figure out the *current* approaches to
interoperability handling, as well as those of us trying to work on
improvements to those standards.
Cheers,
Nick.
--
Nick Coghlan | ncoghlan(a)gmail.com | Brisbane, Australia
Hello!
As many of you are aware there has been an effort to replace the current PyPI with a new, improved PyPI. This project has been codenamed Warehouse and has been progressing nicely. However we’ve run into a bit of an issue when deciding what to support that we’re not feeling super qualified to make an informed decision on.
The new PyPI is going to support translated content (for the UI elements, not for what people upload to there), although we will not launch with any translations actually added besides English. Currently the translation engine we’re using (l20n.js) does not support anything but “Evergreen” browsers (browsers that constantly and automatically update) which means we don’t have support for older versions of IE. My question to anyone who is, or is familiar with places where English isn’t the native language, how big of a deal is this if we only support newer browsers for translations?
If you can weigh in on the issue for this (https://github.com/pypa/warehouse/issues/881) that would be great! If you know someone who might have a good insight, please pass this along to them as well.
Thanks!
-----------------
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA