Richard Jones wrote:
I think that the download_url and download-mirrors approaches are quite independent. I guess though that download_url would specify a directory, not a file. That would then fit in with the above scheme. Then users wouldn't necessarily have to upload their package to the network-of-mirrors.
The download mirrors idea is fine, but it may not always be what package authors want to use, e.g. companies selling Python products may want to have complete control over the sites where you can download their code. The same is true for touchy things like crypto code.
Yep, hence my comment that the download_url and download-mirrors approaches are quite independent.
Even with the mirror idea you still need to tell the system which files have been uploaded and for which platforms these are intended (note that not all package authors use the default distutils naming scheme for files).
I think this could be worked around by having the file submission mechanism force the filename to follow a specific pattern.
You can do that for the mirror-type approach, but certainly not for the download_spec_url approach. While distutils already goes a long way in trying to add enough information to the filename to be able to identify the right one to download, this is sometimes not enough, e.g. RPM names do not include the Python version by default, or you may want to add an OS-specific tag to the name (like "suse_8.rpm").
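To illustrate the limits of filename-based identification, here is a rough sketch of a pattern that a file submission mechanism could enforce. The regular expression and group names are purely illustrative assumptions, not any official distutils naming rule, and the example also shows the gap being discussed: a name like this carries no Python version at all.

```python
import re

# Hypothetical pattern for distutils-style binary distribution filenames,
# e.g. "egenix-mx-base-2.1.0.linux-i686.rpm".  Note that nothing in such
# a name records the Python version, which is exactly the RPM problem
# mentioned above.
DIST_FILENAME = re.compile(
    r"(?P<name>[A-Za-z0-9_.-]+?)"       # package name
    r"-(?P<version>\d[\w.]*)"           # package version
    r"\.(?P<platform>[\w-]+)"           # platform tag, e.g. linux-i686
    r"\.(?P<ext>rpm|exe|tar\.gz|zip)$"  # archive type
)

def parse_dist_filename(filename):
    """Return a dict of filename components, or None if it doesn't match."""
    m = DIST_FILENAME.match(filename)
    return m.groupdict() if m else None
```

A spec file would not need such guessing, since it can state all of this explicitly.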
The specification file would make this information machine readable and thus enable a download mechanism to choose the right version for download.
A central download file specification would solve all of these problems by being explicit about the names and locations. You only need one URL for this file, which is why I proposed to use the download_url for it.
My argument is that the download file spec system would be quite complex for the end user to work with. Well, at least I can't think of a reasonable setup - but perhaps you have :)
Not at all: by letting the distutils command read the spec file, it can make most if not all decisions automatically.
In the end, the end user will just issue:
python pypi.py install pyxml
and the system will do the rest (talk to the PyPI server, fetch the download spec, identify the right download URL, find a usable mirror, download the prebuilt binary distribution and install it).
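The selection step in that flow can be sketched in a few lines. Everything here is an assumption for illustration - the spec format, the field names and the "source" fallback are not part of any existing distutils or PyPI interface:

```python
# Hypothetical core of the proposed "pypi.py install" flow: given the
# parsed download spec, pick the right URL for the local platform and
# Python version, falling back to a source distribution if no prebuilt
# binary matches.

def choose_download(spec_entries, platform, python_version):
    """Pick the download URL matching platform and python_version.

    Each entry is assumed to be a dict with 'platform', 'python' and
    'url' keys; an entry with platform 'source' serves as the fallback.
    """
    fallback = None
    for entry in spec_entries:
        if entry["platform"] == platform and entry["python"] == python_version:
            return entry["url"]
        if entry["platform"] == "source":
            fallback = entry["url"]
    return fallback
```

With a spec listing a linux-i686 RPM and a source tarball, a Linux user would get the RPM and everyone else the tarball, without the end user having to decide anything.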
Perhaps it needs to be renamed to 'download_spec_url'.
Hurm - we could just accept that a download URL with a specific suffix is a spec (e.g. .pkgspec)? We could go as far as to say that if it's an XML file (i.e. .xml), then it's a download spec. I'm presupposing XML, of course, when the INI format would probably be enough. Again, I think I'd like to see some more flesh on your proposal (especially the bits about making it as simple as possible for the package maintainer) before I jump on the band-wagon :)
Have a look at the ActiveState PPD format (used by their PPM system to find the right download files):
"""
<SOFTPKG NAME="egenix-mx-base" VERSION="2,1,0">
  <TITLE>eGenix mx Extensions for Python - Base Distribution</TITLE>
  <ABSTRACT>The eGenix mx Extension Series are a collection of Python
  extensions written in ANSI C and Python which provide a large spectrum
  of useful additions to everyday Python programming.

  The Base Distribution includes the Open Source subpackages of the series
  and is needed by all other add-on packages of the series.

  This software is brought to you by eGenix.com and distributed under the
  eGenix.com Public License.</ABSTRACT>
  <AUTHOR>Marc-Andre Lemburg (firstname.lastname@example.org)</AUTHOR>
  <IMPLEMENTATION>
    <PYTHONCORE VERSION="2.1.3" />
    <OS VALUE="linux-i686" />
    <ARCHITECTURE VALUE="i686-linux-thread-multi" />
    <CODEBASE HREF="http://www.egenix.com/files/python/egenix-mx-base-2.1.0.linux-i686.gztar" />
  </IMPLEMENTATION>
</SOFTPKG>
"""
I think we can build on that.
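As a sketch of how little client code such a format needs, here is one way to pull the right download URL out of PPD-style data using the standard library XML parser. The function name and the trimmed-down document are my own; it only assumes the SOFTPKG/IMPLEMENTATION/OS/CODEBASE structure shown in the example above.

```python
import xml.etree.ElementTree as ET

# A trimmed-down version of the PPD example quoted above, kept only to
# demonstrate the lookup; a real client would fetch this via download_url.
PPD = """\
<SOFTPKG NAME="egenix-mx-base" VERSION="2,1,0">
  <IMPLEMENTATION>
    <PYTHONCORE VERSION="2.1.3" />
    <OS VALUE="linux-i686" />
    <CODEBASE HREF="http://www.egenix.com/files/python/egenix-mx-base-2.1.0.linux-i686.gztar" />
  </IMPLEMENTATION>
</SOFTPKG>
"""

def find_codebase(ppd_xml, os_value):
    """Return the CODEBASE HREF of the IMPLEMENTATION matching os_value."""
    root = ET.fromstring(ppd_xml)
    for impl in root.findall("IMPLEMENTATION"):
        os_elem = impl.find("OS")
        if os_elem is not None and os_elem.get("VALUE") == os_value:
            code = impl.find("CODEBASE")
            if code is not None:
                return code.get("HREF")
    return None
```

A PyPI download tool could do essentially this after fetching the spec: match the local platform against the listed implementations and hand the HREF to the downloader.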