Is there a way for a package to recognize that its content clashes with that of another package? This can happen when a package
becomes unmaintained and another, differently named package takes over, with potentially clashing module/__package__ paths.
I have a Python project that depends on an external C library (which is
unrelated to Python and NOT part of the OS).
For an sdist, this is easy: my setup.py assumes that the library is
pre-installed somewhere on the system where setuptools can find it.
However, is there a standard solution for packaging such a project as a
wheel? Ideally, the project should "just work" when doing pip install on
it, which means that the external library should somehow be bundled in the wheel.
I know that numpy, for example, does this, but it also has a very
complicated build system (containing a fork of distutils!).
Does anybody have any pointers for this? Or if you think that this is a
minefield which is not really supported, feel free to say so.
This was opened as an issue (https://github.com/pypa/pip/issues/4032) some years ago, but it was more recently recommended to open a thread here and that issue was closed due to inactivity, so here goes.
pip currently rewrites sys.executable in the shebang line of scripts to be installed from wheels. This is necessary for portability, envs, etc. On the far opposite end of the spectrum, conda finds and rewrites the install prefix in any file anywhere in the package (including binaries).

In IPython/Jupyter (specifically ipykernel), we have a data_file that we want to install at `share/jupyter/kernels/python3/kernel.json`. Like scripts, this file contains a reference to sys.executable, and like scripts, it should be rewritten at install time, since the wheel doesn't know the right value at build time.

However, the sys.executable rewrite in pip is neither configurable nor accessible, so only scripts get this treatment. It would be useful to us to be able to opt in to this modification for data_files as well.
Our current choices are:
1. disable wheels for our package because we need to know sys.executable to create these files, and thus must rely on arbitrary code execution to get it right
2. use `python` and add special runtime-handling (this is what we do, and it can be wrong, e.g. when sys.executable at runtime is not actually the sys.executable used for installation, exactly the problem script shebang rewrite solves for scripts)
3. don't install the kernelspec with the package and tell users that proper installation is a two-step process: `pip install ipykernel; ipython kernel install --sys-prefix`. We used to do this, and it caused lots of problems: uninstalls and upgrades would let the package and kernelspec fall out of sync, because part of the package was not managed by the package manager.
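The runtime handling in option 2 can be sketched roughly as follows (a hypothetical helper, not ipykernel's actual code; the kernelspec dict mirrors the shape of the kernel.json described above):

```python
import sys

# A kernelspec as written at build time, with a generic "python" argv entry
# because the wheel cannot know the installing interpreter's path.
spec = {
    "argv": ["python", "-m", "ipykernel_launcher", "-f", "{connection_file}"],
    "display_name": "Python 3",
    "language": "python",
}

def rewrite_argv(spec, executable):
    """Patch the generic 'python' entry to a concrete interpreter path."""
    patched = dict(spec)
    patched["argv"] = [executable if a == "python" else a for a in spec["argv"]]
    return patched

# At runtime (not install time!) we substitute sys.executable -- which is
# exactly the fragile part: sys.executable now may not be the interpreter
# that performed the installation.
fixed = rewrite_argv(spec, sys.executable)
```

This is precisely the guess that pip's shebang rewrite makes unnecessary for scripts, which is why doing the same for data_files at install time would be more robust.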
A "works for us" version of this could be to allow specifying that a given data_file is a script (or entry point) and should be treated as one. This would let us place a script next to our JSON file in a way that our kernelspec mechanism can find it. This isn't as general a solution, but it works for us without generalizing sys.executable (or sys.prefix) handling to locations in files other than the shebang; it only requires generalizing the installation location of scripts.
Patching 3rd party repositories comes up every so often at $WORK and one of the things we do is build a local version that is generally some released version + a couple of local patches.
We follow the usual version scheme of using the public version identifier + local version label.
The steps are as follows:
python setup.py egg_info -b "+local<build number>" sdist bdist_wheel
twine upload dist/*.whl
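The version scheme in the steps above (public version identifier plus a local version label, per PEP 440) composes like this (a sketch with placeholder values):

```python
def local_version(public: str, build: int) -> str:
    """Compose a PEP 440 version: public identifier + local version label.

    The "+" introduces the local label; the label itself must consist of
    alphanumerics and periods (e.g. "local7").
    """
    return f"{public}+local{build}"

def split_local(version: str):
    """Split a version back into (public identifier, local label or None)."""
    public, _, label = version.partition("+")
    return public, (label or None)

v = local_version("1.4.2", 7)   # "1.4.2" and build 7 are placeholder values
```

Under PEP 440 ordering, "1.4.2+local7" sorts after the plain "1.4.2" release, which is what makes the patched build win locally while still matching the upstream public version.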
We only really care about the wheel though, and for all of our other dependencies we basically just run:
pip wheel <some pip valid URL>
and upload it to our local pypi instance.
This also allows us to directly build from Github repositories.
With for example:
pip wheel git+https://github.com/org/somerepo.git@ourpatchset#egg=somerepo
However, we would like to add the build number so that when a new release is provided upstream we can easily update our local repository (or, preferably, we have upstreamed our patch in the meantime) and build a new release with our patches.
Is there some way to influence the egg info for the build when using pip wheel? Something like:
pip wheel "git+https://github.com/org/somerepo.git@ourpatchset#egg=somerepo&egg_info=-b +local<build number>"
Or is there a better method for dealing with this scenario?
Bert JW Regeer
When in the "Download files" section of a project on PyPI, next to each download there is a convenient "SHA256" link that will copy the SHA-256 fingerprint for that file to the clipboard. I am wondering if there is a programmatic way to access the SHA-256 for a file (besides just scraping the web page)? Ideally there would be some way to construct a URL based on the name of the file that, when called, would return the fingerprint.
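One programmatic route worth noting: PyPI's JSON API at https://pypi.org/pypi/&lt;project&gt;/json already exposes per-file digests, so no web-page scraping is needed. The "urls" key holds one entry per file of the latest release, each with a "digests" mapping that includes the SHA-256. A sketch of extracting it (the sample payload below is an abbreviated, hypothetical stand-in for the real response):

```python
import json

def sha256_for(payload: dict, filename: str):
    """Return the SHA-256 digest for a named file in a PyPI JSON payload."""
    for f in payload.get("urls", []):
        if f["filename"] == filename:
            return f["digests"]["sha256"]
    return None

# Abbreviated, hypothetical sample of the API response shape:
sample = json.loads(
    '{"urls": [{"filename": "example-1.0-py3-none-any.whl",'
    ' "digests": {"sha256": "deadbeef"}}]}'
)
digest = sha256_for(sample, "example-1.0-py3-none-any.whl")
```

In practice the payload would come from fetching the JSON URL for the project; this sketch only shows the extraction step.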
I recently found myself installing a node.js package, and in the process
noticed that (sometime recently?) it started automatically warning about
known vulnerabilities when installing dependencies from a package.json (see
At work, we run safety (https://pypi.org/project/safety/) on all our
projects (which has both free and paid versions). It's great.
I know there's a ton of wonderful work happening at the minute to improve
underlying scaffolding + specification to enable tools other than
setuptools + pip to thrive, so maybe this is the wrong moment, but I
figured I'd ask anyways :) -- what are opinions on running a similar thing
during pip install?
I wanted to check whether the packages available on PyPI.org are scanned for any security vulnerabilities. Can you please confirm?
My concern is how you control whether someone uploads malicious code to GitHub.
For those of you who participated in the PEP 517 discussion during the
summer of 2017 (just prior to its provisional acceptance), I want to flag
that one of the issues discussed back then has now resurfaced for
discussion. This is because the feature was turned on by default in pip's
latest release (19.0) less than a week ago.
The issue is the one around whether the source directory should be included
in sys.path. The resurfacing is more or less as predicted. For example, in
one email from August 29, 2017 (not too long before provisional acceptance),
Nick summarized the state of things by saying:
> So I think we can deem this one resolved in favour of "Frontends must
> ensure the current directory is *NOT* on sys.path before importing the
> designated backend", as starting there will mean we maximise our chances of
> learning something new as part of the initial rollout of the provisionally
> accepted API.
>
> 2. If omitting it is genuinely a problem, we'll likely find out soon enough
> as part of implementing a setup.py backend
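The frontend behaviour that quote requires, scrubbing the current directory from sys.path before importing the designated backend, could be sketched as (a hypothetical helper; "json" below is just a stand-in for a backend name like "setuptools.build_meta"):

```python
import importlib
import os
import sys

def import_backend(backend_spec: str):
    """Import a build backend after ensuring cwd is NOT importable.

    Both "" and the literal cwd path make the source directory importable,
    so strip each before touching the backend. A real spec may be of the
    form "module:object"; this sketch resolves only the module part.
    """
    cwd_entries = {"", os.getcwd()}
    sys.path[:] = [p for p in sys.path if p not in cwd_entries]
    module_name = backend_spec.split(":")[0]
    return importlib.import_module(module_name)

backend = import_backend("json")  # stand-in for e.g. "setuptools.build_meta"
```

The practical consequence is exactly the breakage now being reported: a setup.py that does `import versioneer` or similar relative-to-source imports no longer finds those modules once the frontend enforces this.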
Basically, that "soon enough" moment has arrived, and at least one
discussion on how to resolve it has started on the tracker here:
There is another discussion here, but it's probably better for the
discussion to be in one spot: