[Distutils] Maintaining a curated set of Python packages

Wes Turner wes.turner at gmail.com
Fri Dec 9 00:27:43 EST 2016


On Thursday, December 8, 2016, Nick Coghlan <ncoghlan at gmail.com> wrote:

> Putting the conclusion first, I do see value in better publicising
> "Recommended libraries" based on some automated criteria like:
>
> - recommended in the standard library documentation
> - available via 1 or more cross-platform commercial Python redistributors
> - available via 1 or more Linux distro vendors
> - available via 1 or more web service development platforms
>
>
So would these be attributes tracked by each project's maintainers and then
verified by the maintainer of the known-good set? Or something else?

(Again, here I reach for JSON-LD: a bare "count n" is only so useful; what
matters is *which* {[re]distros, platforms, heartfelt testimonials from
incredible experts}, with URLs. A rough sketch of such a record follows the
list below.)

- test coverage
- seclist contact info AND procedures
- more than one super admin maintainer
- what other criteria should or could we use to vet open source libraries?
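
For example (a rough sketch only; the @context vocabulary and the property
names below are made up to mirror the criteria above, not taken from any
existing schema), one such record could be expressed as a Python dict and
serialized as JSON-LD:

    import json

    # Hypothetical JSON-LD-style vetting record for a single package.
    # Every URL and property name here is a placeholder.
    recommended_package = {
        "@context": "https://example.org/package-vetting#",
        "@id": "https://pypi.org/project/examplepkg/",
        "recommendedInStdlibDocs": "https://docs.python.org/3/library/<module>.html",
        "redistributedBy": ["https://redistributor.example/"],
        "availableInDistros": ["https://distro.example/"],
        "testCoverageReport": "https://ci.example/examplepkg/coverage",
        "securityContact": "security@examplepkg.example",
        "securityProcedures": "https://examplepkg.example/security",
        "superAdminMaintainers": 2,
    }

    print(json.dumps(recommended_package, indent=2))

The point being that each criterion is a link you can follow and verify,
rather than just a checkbox or a count.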


> That would be a potentially valuable service for folks new to the
> world of open source that are feeling somewhat overwhelmed by the
> sheer number of alternatives now available to them.
>
> However, I also think that would better fit in with the aims of an
> open source component tracking community like libraries.io than it
> does a publisher-centric community like distutils-sig.


I don't know whether libraries are really in scope for StackShare. The
up/down voting feature is pretty cool, though.

https://stackshare.io/python


>
> The further comments below are just a bit more background on why I
> feel the integration testing aspect of the suggestion isn't likely to
> be particularly beneficial :)


A catch-all for testing bits from application-specific integration test
suites could be useful (and would likely require at least docker-compose,
dox, or kompose for working with actual data stores).
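
Something like this, for instance (just a sketch, assuming pytest and a
docker-compose.yml that defines the relevant data-store services; the
fixture and test names are hypothetical):

    import subprocess

    import pytest


    # Session-scoped fixture: bring up the data stores defined in a
    # (hypothetical) docker-compose.yml before the application-specific
    # integration tests run, and tear them down afterwards.
    @pytest.fixture(scope="session")
    def data_stores():
        subprocess.check_call(["docker-compose", "up", "-d"])
        try:
            yield
        finally:
            subprocess.check_call(["docker-compose", "down", "-v"])


    def test_app_against_real_data_store(data_stores):
        ...  # the application-specific integration test bits go here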


>
> On 9 December 2016 at 01:10, Barry Warsaw <barry at python.org> wrote:
> > Still, there may be value in inter-Python package compatibility tests,
> > but it'll take serious engineering effort (i.e. $ and time), ongoing
> > maintenance, ongoing effort to fix problems, and tooling to gate
> > installability of failing packages (with overrides for downstreams
> > which don't care or already expend such effort).
>
> I think this is really the main issue, as both desktop and server
> environments are moving towards the integrated platform + isolated
> applications approach popularised by mobile devices.
>
> That means we end up with two very different variants of automated
> integration testing:
>
> - the application focused kind offered by the likes of requires.io and
> pyup.io (i.e. monitor for dependency updates, submit PRs to trigger
> app level CI)
> - the platform focused kind employed by distro vendors (testing all
> the platform components work together, including the app isolation
> features)
>
> The first kind makes sense if you're building something that runs *on*
> platforms (Docker containers, Snappy or FlatPak apps, web services,
> mobile apps, etc).
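
The first kind can be approximated with not much more than polling PyPI
for newer releases of the pinned dependencies (a minimal sketch with
hypothetical pins; services like requires.io and pyup.io then layer PR
automation and CI triggering on top of this):

    import json
    import urllib.request

    # Hypothetical pins, e.g. as parsed from a requirements.txt.
    PINS = {"requests": "2.12.1", "Django": "1.10.3"}

    for name, pinned in PINS.items():
        url = "https://pypi.org/pypi/{}/json".format(name)
        with urllib.request.urlopen(url) as resp:
            latest = json.loads(resp.read().decode("utf-8"))["info"]["version"]
        if latest != pinned:
            print("{}: pinned {}, latest {}".format(name, pinned, latest))
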
>
> The second kind inevitably ends up intertwined with the component
> review and release engineering systems of the particular platform, so
> it becomes really hard to collaborate cross-platform outside the
> context of specific projects like OpenStack that provide clear
> definitions for "What components do we collectively depend on that we
> need to test together?" and "What does 'working' mean in the context
> of this project?".
>
> Accordingly, for an initiative like this to be successful, it would
> need to put some thought up front into the questions of:
>
> 1. Who are the intended beneficiaries of the proposal?
> 2. What problem does it address that will prompt them to contribute
> time and/or money to solving it?
> 3. What do we expect people to be able to *stop doing* if the project
> proves successful?
>
> For platform providers, a generic "stdlib++" project wouldn't really
> reduce the amount of integration testing we'd need to do ourselves (we
> already don't test arbitrary combinations of dependencies, just the
> ones we provide at any given point in time).
>
> For application and service developers, the approach of pinning
> dependencies to specific versions and treating updates like any other
> source code change already works well in most cases.
>
> That leaves library and framework developers, who currently tend to
> adopt the policy of "for each version of Python that we support, we
> test against the latest versions of our dependencies that were
> available at the time we ran the test", leaving testing against older
> versions to platform providers. If there's a key related framework
> that also provides LTS versions (e.g. Django), then some folks may add
> that to their test matrix as well.
>
> In that context, "Only breaks backwards compatibility for compelling
> reasons" becomes a useful long term survival trait for libraries and
> frameworks, as gratuitous breakages are likely to lead to people
> migrating away from particularly unreliable dependencies.


Sometimes, when a feature has no active maintainers, it makes sense to
deprecate it and/or split that functionality (or untested cruft) out into a
separate package.
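
A minimal sketch of the deprecation half of that (the function and package
names here are hypothetical):

    import warnings


    def old_feature(*args, **kwargs):
        # Deprecated in place, pointing at the package the functionality
        # was (hypothetically) split out into.
        warnings.warn(
            "old_feature() is unmaintained here and has been split out "
            "into the 'examplepkg-extras' package; it will be removed "
            "in the next major release",
            DeprecationWarning,
            stacklevel=2,
        )
        # ... the existing implementation stays here until removal ...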


>
> Cheers,
> Nick.
>
> --
> Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia