I am fairly sure that if you give the PyPA that suggestion, they will just deflate at the thought of the workload. Besides, we already offer private repos for free, in several ways ranging from devpi to python -m SimpleHTTPServer in a specially created directory.
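For the SimpleHTTPServer route, here is a minimal sketch using the Python 3 stdlib equivalent (http.server); the index directory, project name, and file are all made up for illustration:

```python
# Sketch of a throwaway "private index": a directory with one
# subdirectory per project, served over plain HTTP by the stdlib.
import functools
import http.server
import pathlib
import tempfile
import threading
import urllib.request

# Hypothetical index layout: <root>/mypackage/mypackage-1.0.tar.gz
root = pathlib.Path(tempfile.mkdtemp())
pkg = root / "mypackage"
pkg.mkdir()
(pkg / "mypackage-1.0.tar.gz").write_bytes(b"placeholder, not a real sdist")

# Serve the directory (Python 3 replacement for "python -m SimpleHTTPServer")
handler = functools.partial(http.server.SimpleHTTPRequestHandler,
                            directory=str(root))
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The auto-generated directory listing links to the package file,
# which is all an installer needs to find and fetch it.
port = server.server_address[1]
listing = urllib.request.urlopen(
    f"http://127.0.0.1:{port}/mypackage/").read().decode()
print("mypackage-1.0.tar.gz" in listing)
server.shutdown()
```

An installer can then be pointed at http://localhost:&lt;port&gt;/ as an index, since the directory-per-project layout with file links is what the "simple" index convention expects.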
From: Python-ideas <python-ideas-bounces+tritium-list=sdamon.com(a)python.org> On Behalf Of Nick Humrich
Sent: Wednesday, April 4, 2018 12:26 PM
Subject: [Python-ideas] Pypi private repo's
I am sure this has been discussed before, and this might not even be the best place for this discussion, but I just wanted to make sure this has been thought about.
What if pypi.org supported private repos at a cost, similar to npm?
This could help support the cost of PyPI, and hopefully make it better/more reliable, in turn improving the Python community.
If this discussion should happen somewhere else, let me know.
Today, LWN published my new article "A new package index for Python".
https://lwn.net/Articles/751458/ In it, I discuss security, policy, UX
and developer experience changes in the 15+ years since PyPI's founding,
new features (and deprecated old features) in Warehouse, and future
plans. Plus: screenshots!
If you aren't already an LWN subscriber, you can use this subscriber
link for the next week to read the article despite the LWN paywall.
This summary should help occasional Python programmers -- and frequent
Pythonists who don't follow packaging/distro discussions closely --
understand why a new application is necessary, what's new, what features
are going away, and what to expect in the near future. I also hope it
catches the attention of downstreams that ought to migrate.
Warehouse project manager
The recent discussion of communication workflows prompted me to
investigate exactly what would be involved in migrating the list to
Mailman 3 (with its native web gateway and other account management
improvements), and I'm happy to report that there don't appear to be
any blockers to our initiating the migration.
I'm currently planning to send the migration request to
postmaster(a)python.org later this week (probably Thursday evening), but
don't have an ETA for how long the actual migration may take after
that (as I believe distutils-sig will be one of the larger python.org
list migrations undertaken so far).
Accordingly, this is an initial heads up as to exactly what the change
will involve:
1. If you only ever access the mailing list via email, and never
tinker with your subscription settings or look up threads in the list
archives, then you shouldn't notice any real changes other than some
of the headers on mails from the list changing a bit (such as the new
Archived-At header appearing on each message).
2. If you do use the website to change your subscription settings or
look up threads in the list archives, or have wished you had a more
web-forum-like interface for accessing the list, then the rest of this
email is likely to be of interest :)
== Changes to subscription management (and list moderation) ==
Mailman 3 relies on a more conventional user account management model
than the historically list-centric model in Mailman 2. This means that
either before or after the migration, folks that want to modify their
subscription settings will need to go to https://mail.python.org/mm3/
and register for an account.
If you use GitLab, GitHub, Google, or Facebook, then you can go
straight to https://mail.python.org/mm3/accounts/login/, select one of
those options, and grant the required access to look up your email
address. (At least for GitHub, the request will come from Mark
Sapiro's developer key, and I believe that's the case for the other
services as well)
If your address on the linked service matches your subscription
address, then you're done. If it doesn't match, then you can head to
https://mail.python.org/mm3/accounts/email/ to register more addresses
(and hence link any related subscriptions to your account).
If you don't use any of those services, or simply don't want to use
them with mail.python.org, then head to
https://mail.python.org/mm3/accounts/signup/ to create a conventional
username-and-password based account.
Regardless of how you sign up, the primary authentication mechanism is
access to the relevant email address - the old MM2 plain text email
password isn't used at all.
After the migration, the current
https://mail.python.org/mailman/listinfo/distutils-sig URL will
automatically redirect to
For a working example of what that will look like, see
== Changes to list archiving (and the native web gateway) ==
After the migration, the current list archive at
https://mail.python.org/pipermail/distutils-sig/ will remain in place
in order to preserve existing links, but will no longer be updated
with new messages. That page will also be updated with a link to the
new archiver/web gateway page.
That page will be at
and not only features stable and predictable URLs for each post, but
also includes a native web gateway, allowing folks to both create new
threads and reply to existing ones using the web page, without needing
to explicitly subscribe to the list first.
Again, core-workflow provides an example of what that will look like
in practice, with the new archive at
and the post-migration legacy archive at
Nick Coghlan | ncoghlan(a)gmail.com | Brisbane, Australia
In case you're planning your PyCon Cleveland travel: we are planning to
hold a Warehouse/packaging sprint at PyCon (the sprints are Monday, May
14th - Thursday, May 17th 2018).
We welcome package maintainers, backend and frontend web developers,
infrastructure administrators, technical writers, and testers to help us
make the new PyPI, and the packaging ecosystem more generally, as usable
and robust as possible. I took the liberty of updating
https://us.pycon.org/2018/community/sprints/ to say so.
Once we're closer to the sprints I'll work on a more detailed list of
things we'll work on in Cleveland.
Currently in the packaging space, we have a number of avenues for communication, which are:
- Other project specific mailing lists
- Various issue trackers spread across multiple platforms.
- Probably more places I’m not remembering.
The result of this is that all discussion ends up being super fractured amongst the various places. Sometimes that is exactly what you want (for instance, someone who is working on the wheel specs probably doesn’t care about deprecation policy and internal module renaming in pip) and sometimes that ends up being the opposite of what you want (for instance, when you’re describing something that touches PyPI, setuptools, flit, pip, etc all at once).
Theoretically the idea is that distutils-sig is where cross-project stuff goes, IRC/gitter is where real-time discussion goes, and the various project mailing lists and issue trackers are where the project-specific bits go. The problem is that this often doesn’t actually happen in practice, except for the largest and most obvious of changes.
I think our current “communications stack” kind of sucks, and I’d love to figure out a better way for us to handle this that solves the sort of weird “independent but related” set of projects we have here.
From my POV, our major problems are:
* Discussion gets fractured across a variety of platforms and locations, which can make it difficult to actually keep up with what’s going on but also to know how to loop in someone relevant if their input would be valuable. You have to more or less simultaneously know someone’s email, Github username, IRC nick, bitbucket username, etc to be able to bring threads of discussion to people’s attention.
* It’s not always clear to users where a discussion should go; often they’ll come to one location and need to be redirected to another. If any discussion did happen in the wrong location, it tends to need to be restarted in the new location (and depending on the specific platform, it may be impossible to actually redirect everyone to the proper location, so you again end up with the discussion fractured across two places).
* A lot of the technology in this stack is particularly old, and lacks a lot of the modern affordances that newer things have. An example is being able to edit a discussion post to fix typos that can hinder the ability of others to understand what’s being talked about. In your typical mailing list or IRC there’s no mechanism for editing an already-sent message, so your only options are to let the problem ride and hope it doesn’t trip up too many people, or to send an additional message correcting the error. But corrections show up as additional, later messages, which someone might not even see until they’ve already been thoroughly confused by the first message (since people tend to read email/IRC in a linear fashion).
- There are a lot of things under this one; others include being able to control, in a more fine-grained manner, what email you’re going to get.
- Holy crap, formatting and typography to make things actually readable and not a big block of plaintext.
* We don’t have a clear way for users to get help, leaving users to treat random issues, discussion areas, etc. as a support forum, rather than some place that’s actually optimized for that. Some of this ties back into the earlier points, too, where it’s hard to actually redirect discussions.
These aren’t *new* problems, and often the existing members of a community are the least affected, because they’ve already spent the effort learning the ins and outs and curating a (typically custom) workflow that they’ve grown accustomed to. The problem is that this often means new users are left out, and the community gets smaller and smaller as time goes on as people leave and aren’t replaced with new blood, because newcomers are driven off by the issues with the stack.
A big part of the place this is coming from, is me sitting back and realizing that I tend to be drawn towards pulling discussions into Github issues rather than onto the varying mailing lists, not because that’s always the most appropriate place for it, but because it’s the least painful place in terms of features and functionality. I figure if I’m doing that, when I already have a significant investment in setting up tooling and being involved here, that others (and particularly new users) are likely feeling the same way.
I apologize if this email spams you, but this is the only email address I
can find from pypa.io. And we are in a situation
that needs your help.
We have a former employee who created a user/project on pypi named *pinflow*,
which has been inactive for 2+ years.
And now we want to reuse this name to open-source a project. We know the
email address of this user, but the passwords for both that email and the
pypi account were lost during the transition period.
Is there any way to recover/reset the account's password so that we can
upload a new package?
Thanks in advance and this will give us huge value.
(please little-r (reply directly) to me if you can help and I can share the email address of
Installing numpy with setupegg.py does indeed install numpy as an egg,
but it does not solve the problem I describe here (tested with
numpy-1.3.0rc1, and the rest as described below). If numpy is listed
as a dependency for another package which is installed into a virtual
environment, setuptools tries to download and install numpy instead
of just adjusting the easy_install.pth file.
Hmm. I tried to install my packages without a virtual environment, and
that works ok. So it must be a problem in the interaction between
setuptools and virtualenv... I'll cross-post this to the virtualenv list.
----- "David Cournapeau" <david(a)ar.media.kyoto-u.ac.jp> wrote:
> Christian Marquardt wrote:
> > Hello,
> > I've run into the following problem:
> > - I have numpy (v1.3.0b1) successfully installed in my system's
> > site-packages (for Python 2.5.2).
> > - I have setup a virtual environment (with virtualenv 1.3.3) which
> > uses the systems site-packages; in particular, I can load numpy
> > when working in that environment.
> > - I have a (namespaced) package which requires numpy (via
> > "install_requires = ['numpy']" in the setup() call of setuptools
> > v0.6c9, as coming with virtualenv), and try to build that in
> > the virtual environment using the usual python setup.py install.
> > - When processing the requirements for the packages, setuptools
> > downloads a fresh copy of numpy 1.3.0b1 from PyPI and tries
> > to install it - although numpy is already available in the
> > system-wide site-packages directory.
> > To me, that's a bug, but I would like to know if it is a bug in
> > virtualenv (which might not support setuptools in the correct way) or
> > in setuptools, which tries to install an already existing package.
> If you want setuptools 'support', I think you should install numpy
> through the setupegg.py script.
Dr. Christian Marquardt Email: christian(a)marquardt.sc
Wilhelm-Leuschner-Str. 27 Tel.: +49 (0) 6151 95 13 776
64293 Darmstadt Mobile: +49 (0) 179 290 84 74
Germany Fax: +49 (0) 6151 95 13 885
Is it currently possible to upgrade dependencies as well when
upgrading a package? If not, this would be a really nice feature to
add to easy_install. Maybe a call like:
easy_install --upgrade --upgrade-deps Package
Obviously it makes sense to leave this off by default.
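For illustration, the first step such a hypothetical --upgrade-deps flag would need is to enumerate a package's declared dependencies so each could be upgraded in turn. A rough stdlib sketch (using the modern importlib.metadata rather than the pkg_resources API easy_install actually consulted):

```python
# Hedged sketch, not easy_install code: list a package's direct
# dependencies by name, which an --upgrade-deps option would then
# upgrade one by one.
import re
from importlib.metadata import PackageNotFoundError, requires

def req_name(req):
    # Strip the version spec and environment marker:
    # "numpy>=1.3.0b1 ; extra == 'test'" -> "numpy"
    return re.split(r"[<>=!~;\s(\[]", req, maxsplit=1)[0]

def direct_deps(name):
    # requires() returns the raw requirement strings, or None if the
    # package declares no dependencies.
    try:
        return [req_name(r) for r in (requires(name) or [])]
    except PackageNotFoundError:
        return []

print(req_name("numpy>=1.3.0b1"))
```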
Elvelind Grandin reported a problem with the "develop" command that turned
out to be a flaw in its --find-links support. Specifically, it wasn't ever
processing the links. :)
In the process of fixing it, I wound up cleaning up an annoying (to me, at
least) quirk of the previous workings of --find-links. It used to be that
find-links would always be processed first, no matter what, even if you
were doing a completely local operation. This would've been especially
annoying if it carried over into "develop", so I made some changes.
Now, if an item passed to --find-links is local (a filename or file: URL),
or a direct link to an egg or other distribution, it is indexed
immediately. Remote URLs are now only retrieved if a dependency can't be
resolved locally, or if you use the -U or --upgrade options (this goes for
Note that this is a behavior change for easy_install, which was effectively
treating --find-links as though you'd specified --upgrade in certain
cases. So, if you're used to getting upgrades downloaded as a result of
using --find-links, please note that this will no longer
happen. EasyInstall will now *only* go online if a dependency can't be
resolved locally, if -U or --upgrade is used, or if you provided suitable
direct URLs via an argument or --find-links, or via a link in a local .html file.
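The decision rule described above could be sketched like this (not the actual EasyInstall code, just an illustration of the new behavior; the URLs are made up):

```python
# Hedged sketch of the new --find-links handling: local items (filenames
# or file: URLs) and direct links to distributions are indexed
# immediately; other remote URLs are deferred until a dependency can't
# be resolved locally or -U/--upgrade is given.
def classify_find_links(items):
    immediate, deferred = [], []
    for item in items:
        local = item.startswith("file:") or "://" not in item
        direct = item.endswith((".egg", ".tar.gz", ".zip"))
        (immediate if local or direct else deferred).append(item)
    return immediate, deferred

immediate, deferred = classify_find_links([
    "file:///tmp/eggs",                   # file: URL -> indexed now
    "deps/",                              # local directory -> indexed now
    "http://example.com/simple/",         # remote index -> only on demand
    "http://example.com/pkg-1.0.tar.gz",  # direct distribution link -> indexed now
])
print(immediate)
print(deferred)
```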