Buzzword bingo in the subject...
Situation: I'm experimenting with docker, mostly in combination with
buildout. But it also applies to pip/virtualenv.
I build a docker container with a Dockerfile: install some .deb
packages, add the current directory as /code/, run buildout (or pip),
ready. Works fine.
Now for local development: it is normal to mount the current directory as
/code/, overlaying the directory that was originally added to the docker
image during the build. This means that anything done inside /code/ at
build time is effectively discarded in development. So a "bin/buildout"
run has to be done again, because the bin/, parts/, eggs/ etc directories
are gone.
Same problem with a virtualenv. *Not* though when you run pip directly
and let it install packages globally! Those are installed outside of
/code in /usr/local/somewhere.
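For illustration, a Dockerfile along these lines installs dependencies globally, so they survive a development bind-mount over /code/ (the base image and file names are placeholders, not from the original setup):

```dockerfile
FROM python:2.7
# baked into the image, but hidden when /code/ is volume-mounted
ADD . /code
WORKDIR /code
# no virtualenv: packages land in /usr/local/..., outside /code/,
# so a development mount over /code/ does not wipe them
RUN pip install -r requirements.txt
```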
A comment and a question:
- Comment: "everybody" uses virtualenv, but be aware that it is
apparently normal *not* to use virtualenv when building dockers.
- Question: buildout, like virtualenv+pip, installs everything in the
current directory. Would an option to install it globally instead make
sense? I don't know if it is possible.
Reinout van Rees http://reinout.vanrees.org/
"Learning history by destroying artifacts is a time-honored atrocity"
If you want wheel to be successful, **provide a build server**.
Quoting the author of psutil:
On Linux / Unix the only way you have to install psutil right now is via source / tarball. I don't want to provide
wheels for Linux (or other UNIX platforms). I would have to cover all supported python versions (7) both 32 and 64 bits,
meaning 14 extra packages to compile and upload on PYPI on every release. I do that for Windows because installing VS is
an order of magnitude more difficult than installing gcc on Linux/UNIX but again: not willing to do extra work on that
What you could do is create a wheel yourself with python setup.py build bdist_wheel by using the same python/arch
version you have on the server, upload it on the server and install it with pip.
What do you think?
Thomas Guettler http://www.thomas-guettler.de/
As anyone who has uploaded anything to PyPI recently is aware, PyPI's
uploading routine has been in a semi-broken state for a while now. It will
regularly raise a 500 error, but will often still record the release
(sometimes only a partial release).
Well, the good news is that Warehouse's upload routines should generally be
good enough to use now. This will be more or less the same as if you uploaded
to PyPI itself (they share a backing data store) but hitting the newer, better
code base instead of the slowly decaying legacy code base. You can upload via
Warehouse by editing your ~/.pypirc file and making it look something like this:
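Something along these lines, where the Warehouse repository URL is an assumption on my part (verify the current upload endpoint before use):

```ini
[distutils]
index-servers =
    pypi
    warehouse

[pypi]
repository = https://pypi.python.org/pypi
username = your-username
password = your-password

[warehouse]
; assumed Warehouse upload endpoint; verify before use
repository = https://upload.pypi.io/legacy/
username = your-username
password = your-password
```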
Alternatively, you can ditch the [warehouse] section, and just totally switch
over to this by doing:
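That is, point the default ``pypi`` entry straight at Warehouse (again, the endpoint URL is an assumption; verify it before use):

```ini
[distutils]
index-servers =
    pypi

[pypi]
; assumed Warehouse upload endpoint; verify before use
repository = https://upload.pypi.io/legacy/
username = your-username
password = your-password
```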
Then you can upload using ``twine upload -r warehouse dist/*`` or
``twine upload dist/*`` based upon which of the above options you picked. This
code is not as battle-tested as PyPI itself, so if you run into any bugs
please file an issue at https://github.com/pypa/warehouse. This code should
generally give a successful response all of the time (given the upload itself
was good) and properly utilizes database transactions so that it should
completely eliminate the cases where you get a partial upload.
Hopefully this will help solve some of the problems folks are having with PyPI
in the interim before we can completely switch over to Warehouse.
I'm writing a python-binding project for project A written in C++.
Project A is on github. It supports compiling itself to produce .so in
linux or .dll in windows.
My python-binding project depends on the .dll and .so files.
Now I want to register my package on pypi. So that users can install my
package with only running `pip install XXXX`.
I have to support both windows and linux. The only solution I can figure
out is to include both .dll and .so in my package. But this ends up with
both .so and .dll installed on every platform, which feels dirty.
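If both binaries do get shipped, the wrapper can at least pick the right file at import time; a minimal sketch (library and package names are hypothetical):

```python
import os
import sys

def native_lib_path(pkg_dir, base="projecta"):
    """Return the path of the bundled shared library for this platform."""
    if sys.platform.startswith("win"):
        name = base + ".dll"          # Windows build artifact
    else:
        name = "lib" + base + ".so"   # Linux build artifact
    return os.path.join(pkg_dir, name)

# typically loaded with ctypes, e.g.:
#   ctypes.CDLL(native_lib_path(os.path.dirname(__file__)))
```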
Is there a better way to achieve my goal?
PS: The compile process of Project A is a little complicated, so I can't
use python Extension to build my dynamic library.
This question follows on from an earlier question of mine.
Thanks in advance :)
In #pypa-dev, I raised the possibility of moving our PyPA support channels from IRC to another hosted solution that enables persistence. Although IRC has served us well, there are systems now with clear feature advantages, which are crucial to my continuous participation:
- always-on experience; even if one’s device is suspended or otherwise offline.
- mobile support — the in-cloud experience is essential for low power and intermittently connected devices.
- push notifications allow a project leader to remain largely inactive in a channel but have their attention raised promptly when users make a relevant mention.
- continuous, integrated logging for catching up on the conversation.
Both Gitter and Slack offer the experience I’m after, with Gitter feeling like a better fit for open-source projects (or groups of them).
I’ve tried using IRCCloud, and it provides a similar, suitable experience on the same IRC infrastructure, with one big difference. While Gitter and Slack offer the above features for free, IRCCloud requires a $5/user/month subscription (otherwise, connections are dropped after two hours). I did reach out to them to see if they could offer some professional consideration for contributors, but I haven’t heard from them. Furthermore, IRCCloud requires an additional account on top of the account required for Freenode.
In addition to the critical features above, Gitter and Slack offer other advantages:
- For Gitter, single-sign on using the same Github account for authentication and authorization means no extra accounts. Slack requires one new account.
- An elegant web-based interface as a first-class feature, lowering the barrier to entry for users.
- Zero-install or config.
- Integration with source code and other systems.
It’s because of the limitations of these systems that I find myself rarely in IRC, only joining when I have a specific issue, even though I’d like to be permanently present.
Donald has offered to run an IRC bouncer for me, but such a bouncer is only a half-solution, not providing the push notifications, mobile apps (IRC apps exist, but just get disconnected, and often fail to connect on mobile provider networks), or integrated logging.
I note that both Gitter and Slack offer IRC interfaces, so those users who prefer their IRC workflow can continue to use that if they so choose.
I know there are other alternatives, like self-hosted solutions, but I’d like to avoid adding the burden of administering such a system. If someone wanted to take on that role, I’d be open to that alternative.
I’d like to propose we move #pypa-dev to /pypa/dev and #pypa to /pypa/support in gitter.
Personally, the downsides to moving to Gitter (other than enacting the move itself) seem negligible. What do you think? What downsides am I missing?
On Wed, May 25, 2016 at 1:24 PM Sylvain Corlay <sylvain.corlay(a)gmail.com>
> *2) On the need for something like pip.locations.distutils_scheme in
> When installing a python package that has a directive for the install_headers
> distutils command, these headers are usually installed under the main
> python include directory, which can be retrieved with
> sysconfig.get_path('include') or directly referred to as the 'include' string
> when setting the include directories of an extension module.
> However, on some systems like OS X, headers for extension modules are not
> located under the python include directory but in
> `/usr/local/include/pythonX.Y`.
> Is there a generic way to find the location where headers are installed in
> a python install? pip.locations has a distutils_scheme function which seems
> to be returning the right thing, but it seems to be a bit overkill to
> require pip. On the other side, no path returned by sysconfig corresponds
> to `/usr/local/include/pythonX.Y`.
As a Homebrew maintainer this sounds like something that Homebrew
could influence. Are there any packages in the wild that use this
mechanism? It seems that headers are mostly installed beneath
site-packages. I don't have strong feelings about whether Homebrew
should have better support for install_headers or whether that would
be straightforward to implement but IIRC we've had no prior reports of
this causing trouble.
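For what it's worth, the paths sysconfig does expose can be enumerated directly; the point of the question above is that, on the systems Sylvain describes, none of them corresponds to /usr/local/include/pythonX.Y:

```python
import sysconfig

# where this interpreter's own headers (Python.h) live
print(sysconfig.get_path("include"))

# every named path in the default install scheme
for name in sysconfig.get_path_names():
    print(name, "->", sysconfig.get_path(name))
```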
Just a heads up,
I do not intend to implement the “Use PyPI as an OpenID Provider” functionality in Warehouse. Looking at the actual usage, it appears that very few people have ever taken advantage of it (a total of 201 (user, site) combinations set to “Allow Always” in the database), and I don’t believe it’s worth continuing. Longer term, Python should get centralized sign-on across all its web properties, which could re-implement this feature for folks who want it.
If you’re using this for anything, I recommend migrating off of it onto something else.
Warehouse Issue: https://github.com/pypa/warehouse/issues/60
I had some thoughts on documentation integration (and its deprecation in Warehouse), which I wrote up here: https://paper.dropbox.com/doc/Integrated-Docs-is-an-Essential-Feature-HEqnF…
I include the full text below for ease of access and response.
Yesterday, as I was migrating Setuptools from PyPI to Warehouse due to this issue<https://github.com/pypa/setuptools/issues/589>, Donald alerted me to the fact that Warehouse does not plan to support documentation hosting<https://github.com/pypa/setuptools/issues/604#issuecomment-223614048>, and as a result, integrated documentation for the packaging infrastructure is deprecated.
At first blush, this decision seems like a sound one - decouple independent operations and allow a mature, established organization like RTD<https://readthedocs.org/> to support and manage Python Package documentation. I have nothing but respect for RTD; I reference them here only as the prime example of a third-party doc hosting solution.
After spending most of a day getting just one project’s documentation (mostly) moved from PyPI to RTD, I’ve realized there are several shortcomings with this approach. Integrated hosting provides several benefits not offered by RTD:
* Uniform custody - the person or team that owns/maintains the package also owns/maintains the documentation with a single point of management and authorization.
* Shared credentials - the accounts used to administer the packages are re-used to authenticate authorized users to the documentation. RTD requires a separate set of accounts for each user involved with the documentation.
* Shared naming - a name registered as a package is automatically reserved for the documentation.
* Automatic linkage - PyPI provides a Package Documentation link on each package that has documentation.
* Control over the build process - although RTD does provide excellent hooks for customization and control, the process is essentially out of the hands of the operator. Thus when issues like this<https://github.com/rtfd/readthedocs.org/issues/2254> arise (probably rarely), the user is at the mercy of the system. With PyPI hosting, it was possible to manually build and upload docs when necessary and to control every aspect of the build process, including platform, Python version, etc.
* One obvious choice - although RTD today is the obvious choice for hosting, I can see other prominent alternatives - using Github pages or self hosting or perhaps another service will emerge that’s more integrated with the packaging process. Having a sanctioned, integrated documentation hosting gives users confidence that it’s at least a good default choice if not the best one.
* API access - Although RTD hopes to provide support for project creation through an API, it currently only allows querying through the public API. Therefore, it’s not feasible for tools to mechanically configure projects in RTD. Each project has to be manually created and administered.
For me, this last limitation is the most onerous. I maintain dozens of projects, many of them in collaboration with other teams, and in many of those, I rely on a model implementation<https://github.com/jaraco/skeleton> that leverages PyPI hosting as part of the package release process to publish documentation. Moving each of these projects to another hosting service would require the manual creation and configuration of another project for each. As I consider the effort it would take to port all of these projects and maintain them in a new infrastructure, I’m inclined to drop documentation support for all but the most prominent projects.
The linkage provided by PyPI was a most welcome feature, and I’m really sad to see it go. I’d be willing to give up item 5 (Control) if the other items could be addressed.
On 3 Jun 2016 4:03 am, "Donald Stufft" <donald(a)stufft.io> wrote:
>> On Jun 2, 2016, at 9:16 PM, Nick Timkovich <prometheus235(a)gmail.com>
>> I can definitely believe there are more important things to do, but some
of us aren't versed in the intricacies of what's up top and don't have the
familiarity to dive in. Us GitHub plebs are just raring to work on a
feature we think is within our grasp ;-)
> Yup! Nick was speaking to why folks like myself haven’t done it yet. If
some enterprising person (perhaps you!) takes the time to write up a PEP
and work it through the process, then there’s no reason to wait for me (or
Nick, or any of the other core team) to do it :)
Right, the key part of my post is the second paragraph: we actually now
have a relatively simple mechanism to capture proposals like this one,
which is issues and pull requests against the "specifications" section of
the Python packaging user guide on GitHub.
Previously a key sticking point was not having a way to document added
fields without a full PEP, which imposed way too much overhead for adding a
simple attribute like Provides-Extra or the proposed Description-Format field.
For this to work, I think the concrete changes you would need would be:
- Python packaging user guide to document the new field in the core
metadata specification
- setuptools to support setting it
- Warehouse to respect it for all defined rendering formats
- potentially legacy PyPI to support respecting it at least for
reStructuredText and Markdown (treating others as plain text)
- PyPUG again to provide usage docs once the default tools support it
> Donald Stufft