On Wed, Feb 15, 2017 at 5:27 AM, Nick Coghlan wrote:
On 15 February 2017 at 12:58, Nathaniel Smith wrote:
On Wed, Feb 15, 2017 at 3:33 AM, Nick Coghlan wrote:
wrote: - "requires": list where entries are either a string containing a PEP 508 dependency specifier or else a hash map contain a "requires" key plus "extra" or "environment" fields as qualifiers - "integrates": replacement for "meta_requires" that only allows pinned dependencies (i.e. hash maps with "name" & "version" fields, or direct URL references, rather than a general PEP 508 specifier as a string)
What's accomplished by separating these? I really think we should strive to have fewer, more orthogonal concepts whenever possible...
It's mainly a matter of incorporating https://caremad.io/posts/2013/07/setup-vs-requirement/ into the core data model. The distinction between abstract development dependencies and concrete deployment dependencies is incredibly important for any scenario that involves publisher-redistributor-consumer chains, but it is entirely non-obvious to folks who are only familiar with the publisher-consumer case that comes up during development for personal and open-source use.
Maybe I'm just being dense, but umm, I don't know what any of these words mean :-). I'm not unfamiliar with redistributors; part of my confusion is that this is a concept that, AFAIK, distro package systems don't have. Maybe it would help if you gave a concrete example of a scenario where they would benefit from having this distinction?
One particular area where this is problematic is the widespread advice "always pin your dependencies", which is usually presented without the all-important "for application or service deployment" qualifier. As a first approximation: pinning-for-app-or-service-deployment == good, pinning-for-local-testing == good, pinning-for-library-or-framework-publication-to-PyPI == bad.
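To make the library case concrete, consider two hypothetical projects that both pin a shared dependency in their published metadata:

    {"name": "lib-a", "requires": ["six == 1.10.0"]}

    {"name": "lib-b", "requires": ["six == 1.9.0"]}

No environment can satisfy both pins, so lib-a and lib-b can never be installed together; had they declared "six >= 1.9", a resolver could pick a single shared version. The same pins are unproblematic (and usually desirable) in an application's deployment configuration.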
pipenv borrows the Ruby community's solution to modeling this (Bundler's Gemfile and Gemfile.lock) by having Pipfile for abstract dependency declarations and Pipfile.lock for the concrete, integration-tested ones, so the idea here is to propagate that model to pydist.json by separating the "requires" field, with abstract development dependencies, from the "integrates" field, with concrete deployment dependencies.
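Roughly, the Pipfile might declare an abstract requirement like requests = ">=2.0", while the generated Pipfile.lock records the exact version that was actually tested. A heavily simplified, illustrative lock-file fragment (real lock files also record artifact hashes and the full transitive dependency set):

    {
        "default": {
            "requests": {"version": "==2.13.0"}
        }
    }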
What's the benefit of putting this in pydist.json? I feel like for the usual deployment cases (a) going straight from Pipfile.lock -> venv is pretty much sufficient, with no need to put this into a package, but (b) if you really do want to put it into a package, then the natural approach would be to make an empty wheel like "my-django-app-deploy.whl" whose dependencies are the contents of Pipfile.lock. There's certainly a distinction to be made between the abstract dependencies and the exact locked dependencies, but to me the natural way to model that distinction is by re-using the distinction we already have between source packages and binary packages. The build process for this placeholder wheel is to "compile down" the abstract dependencies into concrete ones, and the resulting wheel encodes the result of that compilation. Again, no new concepts needed.
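Under that approach, the metadata of the hypothetical my-django-app-deploy wheel would contain nothing but the compiled-down pins, expressed as ordinary requirements (versions invented for illustration):

    {
        "name": "my-django-app-deploy",
        "requires": [
            "django == 1.10.5",
            "requests == 2.13.0"
        ]
    }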
In the vast majority of publication-to-PyPI cases people won't need the "integrates" field, since what they're publishing on PyPI will just be their abstract dependencies, and any warning against using "==" will recommend using "~=" or ">=" instead. But there *are* legitimate uses of pinning-for-publication (like the PyObjC metapackage bundling all its subcomponents, or when building for private deployment infrastructure), so there needs to be a way to represent "Yes, I'm pinning this dependency for publication, and I'm aware of the significance of doing so".
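In that scheme, a metapackage along PyObjC's lines might spell out its pins as follows (the subcomponent names are real, the version is illustrative):

    {
        "name": "pyobjc",
        "integrates": [
            {"name": "pyobjc-core", "version": "3.2.1"},
            {"name": "pyobjc-framework-Cocoa", "version": "3.2.1"}
        ]
    }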
Why can't PyObjC just use regular dependencies? That's what distro metapackages have done for decades, right?

-n

--
Nathaniel J. Smith -- https://vorpus.org