Hi, after being invited by Greg Ward to this list, I tried to read all the documents available concerning the topic of the list, and I also tried to get up to date with the mailing list by reading the archives. Now I'd like to give my $0.02 as someone who tries to keep a quite big distribution up to date, one which also includes a lot of third-party modules. This mail is rather long, because I'd like to comment on everything in one mail; I think that quite a lot of it relates to each other. Please keep in mind that I am speaking as an end user in one way or another: I will have to use this as a developer, and I will have to use it as a packager of third-party developers' modules. So I won't comment on any implementation issue unless it is absolutely critical. And please also keep in mind that I may be talking about things you have already discussed; I am quite new to this list and have only read the archives of this month and the last. ;-)

- Tasks and division of labour

I think that the packager and installer roles are a little bit mixed up. In my eyes the packager's job is to build and test the package on a specific platform and also to build the binary distribution. This is also what you wrote on the web page. But the installer's role should only be to install the prebuilt package, because his normal job is to provide up-to-date software to the users of the systems he manages. And he has enough to do with that.

- The proposed Interface

Basically I can say that I like the idea that I can write in my RPM spec file

    setup.py build
    setup.py test
    setup.py install

and afterwards have the package installed somewhere on my machine, being absolutely sure that it works as intended. I think that this is the way it works for most Perl modules.

But I have problems with the bdist option of setup.py, because I think it is hard to implement. If I got this right, I as an RPM and Debian package maintainer should be able to say

    setup.py bdist rpm
    setup.py bdist debian

and afterwards have a Debian and an RPM package of the Python package. Nice in theory, but this would require that setup.py or the distutils packages know how to create these packages; that means we have to implement a meta packaging system on top of existing packaging systems, which are powerful themselves.

So what would it look like when I call these commands above? Would the distutils stuff create a spec file (the input file to create an rpm) and then call rpm -ba <specfile>? And inside the rpm build process setup.py is called again to compile and install the package's contents? Finally rpm creates the two normal output files: the actual binary package, and the source rpm from which you can recompile the binary package on your machine. This is the same for Debian, Slackware, the RPM-based Linux distributions, Solaris packages, and BeOS software packages. The last is only a vague guess, because I have only looked at the Be system very briefly.

- What I would suggest setup.py should do

The options that I have no problem with are

    build_py  - copy/compile .py files (pure Python modules)
    build_ext - compile .c files, link to .so in blib
    build_doc - process documentation (targets: html, info, man, ...?)
    build     - build_py, build_ext, build_doc
    dist      - create source distribution
    test      - run test suite
    install   - install on local machine

What should make_blib do?

But what I do require is that I can tell build_ext which compiler switches to use, because I may need different switches on my system than the original developer used.
I would also like to provide the install option with an argument telling it where the files should be installed: I can then tell rpm, for example, that it should compile the extension package as if it would be installed in /usr/lib/python1.5, but at the install stage actually install it into /tmp/py-root/usr/lib/python1.5. So I can build and install the package without overwriting an existing installation of an older version, and I also have a clean way to determine which files actually got installed.

install should also be split up into install and install_doc, and install_doc should also be able to take an argument telling it where to install the files.

I would remove the bdist option, because it would introduce a lot of work: you not only have to tackle various systems but also various packaging systems. I would add a files option instead, which returns a list of the files this package consists of. And consequently a doc_files option is also required, because I like to stick to the way rpm manages doc files: I simply tell it which files are doc files and it installs them the right way.

Another thing that would be fine: if I could extract the package information with setup.py. Something like "setup.py description" would return the full description, and so on.

And I would also add a system option to the command-line options, because I would like to give the setup.py script an option from which it can determine which system it is running on. Why this is required will follow.

- ARCH-dependent sections should be added

What is not clear in my eyes -- maybe I have missed something -- is how you deal with different architectures. What I would suggest here is that we use a dictionary instead of plain definitions of cc, ccshared, cflags, and ldflags. Such a dictionary might look like this:

    compilation_flags = {
        "Linux":    { "cc": "gcc", "cflags": "-O3", ... },
        "Linux2.2": { "cc": "egcs", ... },
        "Solaris":  { "cc": "cc", ... },
    }

And I would then call setup.py like this:

    setup.py -system Linux build

or whatever convention you want to use for command-line arguments.

- Subpackages are also required

Well, this is something that I like very much and something I have really gotten accustomed to. Say you build PIL and also a Tkinter version that supports PIL; then you would like to create both packages and also state that PIL-Tkinter requires PIL.

- Conclusion (or whatever you want to call it)

I as a packager don't require the distutils stuff to be some kind of meta packaging system that generates, from some kind of meta information, the actual package-creation file from which it is called again. And I don't believe we have to develop a completely new packaging system, because for quite a lot of systems such systems already exist. I also think that if we introduced such a system, acceptance wouldn't be very high: people want to maintain their software base with their native tools. A Red Hat Linux user would like to use rpm, a Solaris user would like to use pkg, and a Windows user would like to use InstallShield (or whatever the standard is).

The target of distutils should be to develop a package which can be configured to compile and install the extension package. The developed software should be usable by the packager to extract all the information required to create his native package; the installer should use prebuilt packages where possible, or should be able to install the package by calling "setup.py install". I hope I have described as well as possible what I require as a packager -- and that part, I think, is not the business of distutils but of the native packaging system.
Any comments are welcome, and I am willing to discuss this, as I am absolutely aware that we need a standard way of installing Python extensions.

Best regards, Oliver

--
Oliver Andrich, RZ-Online, Schlossstrasse Str. 42, D-56068 Koblenz
Telefon: 0261-3921027 / Fax: 0261-3921033 / Web: http://rhein-zeitung.de
Private Homepage: http://andrich.net/
Hi Oliver -- glad you could join us -- you raise a lot of good points, and I'll see if I can address (most of) them. Your post will certainly serve as a good "to do" list!

Quoth Oliver Andrich, on 02 February 1999:
- Tasks and division of labour
I think that the packager and installer roles are a little bit mixed up. In my eyes the packager's job is to build and test the package on a specific platform and also to build the binary distribution. This is also what you wrote on the web page.
But the installer's role should only be to install the prebuilt package, because his normal job is to provide up-to-date software to the users of the systems he manages. And he has enough to do with that.
Yes, the packager and installer are a little bit mixed up, because in the most general case -- installing from a source distribution -- there is no packager, and the installer has to do what that non-existent packager might have done. That is, starting with the same source distribution, both a packager (creating a built distribution) and an installer (of the hardcore Unix sysadmin type, who is not afraid of source) would incant:

    # packager:                   # installer:
    tar -xzf foo-1.23.tar.gz      tar -xzf foo-1.23.tar.gz
    cd foo-1.23                   cd foo-1.23
    ./setup.py build              ./setup.py build
    ./setup.py test               ./setup.py test
    ./setup.py bdist --rpm        ./setup.py install

Yes, there is a lot of overlap there. So why is the packager wasting his time building this RPM (or some other kind of built distribution)? Because *this* installer is an oddball running a six-year-old version of Freaknix on his ancient Frobnabulator-2000, and Distutils doesn't support Freaknix' weird packaging system. (Sorry.) So, like every good Unix weenie, he starts with the source code and installs that.

More mainstream users, eg. somebody running a stock Red Hat 5.2 on their home PC (maybe even with -- gasp! -- Red Hat's Python RPM, instead of a replacement cooked up by some disgruntled Python hacker >grin<), will just download the foo-1.23.i386.rpm that results from the packager running "./setup.py bdist --rpm", and incant

    rpm --install foo-1.23.i386.rpm

Easier and faster than building from source, but a) it only works on Intel machines running an RPM-based Linux distribution, and b) it requires that some kind soul out there has built an RPM for this particular Python module distribution. That's why building from source must be supported, and is considered the "general case" (even if not many people will have to do it).

Also, building from source builds character (as you will quickly find out if you ask "Where can I get pre-built binaries for Perl?" on comp.lang.perl.misc ;-). It's good for your soul, increases karma, reduces risk of cancer (but not ulcers!), etc.
- The proposed Interface
Basically I can say that I like the idea that I can write in my RPM spec file

    setup.py build
    setup.py test
    setup.py install

and afterwards have the package installed somewhere on my machine, being absolutely sure that it works as intended. I think that this is the way it works for most Perl modules.
That's the plan: a simple standard procedure so that anyone with a clue can do this, and so that it can be automated for those without a clue. And yes, the basic mode of operation was stolen shamelessly from the Perl world, with the need for Makefiles removed because a) they're not really needed, and b) they hurt portability.
But I have problems with the bdist option of setup.py, because I think it is hard to implement. If I got this right, I as an RPM and Debian package maintainer should be able to say

    setup.py bdist rpm
    setup.py bdist debian

and afterwards have a Debian and an RPM package of the Python package.
That's the basic idea, except it would probably be "bdist --rpm" -- 'bdist' being the command, '--rpm' being an option to it. If it turns out that all the "smart packagers" are sufficiently different and difficult to wrap, it might make sense to make separate commands for them, eg. "bdist_rpm", "bdist_debian", "bdist_wise", etc. Or something like that.
Nice in theory, but this would require that setup.py or the distutils packages know how to create these packages; that means we have to implement a meta packaging system on top of existing packaging systems, which are powerful themselves. So what would it look like when I call these commands above?

Would the distutils stuff create a spec file (the input file to create an rpm) and then call rpm -ba <specfile>? And inside the rpm build process setup.py is called again to compile and install the package's contents? Finally rpm creates the two normal output files: the actual binary package, and the source rpm from which you can recompile the binary package on your machine.
I haven't yet thought through how this should go, but your plan sounds pretty good. Awkward having setup.py call rpm, which then calls setup.py to build and install the modules, but consider that setup.py is really just a portal to various Distutils classes. In reality, we're using the Distutils "bdist" command to call rpm, which then calls the Distutils "build", "test", and "install" commands. It's not so clunky if you think about it that way. Also, I don't see why this constitutes "building a meta packaging system" -- about the only RPM-ish terrain that Distutils would intrude upon is knowing which files to install. And it's got to know that anyways, else how could it install them? Heck, all we're doing here is writing a glorified Makefile in Python because Python has better control constructs and is more portable than make's language. Even the lowliest Makefile with an "install" target has to know what files to install.
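To make that mechanism concrete, here is a minimal sketch of what such a "bdist" command might boil down to. Every name in it is illustrative -- nothing here is a settled Distutils API: write a spec file from the package meta-data, then hand control to rpm, which calls setup.py again inside its build phases.

    import os

    SPEC_TEMPLATE = """\
    Summary: %(description)s
    Name: %(name)s
    Version: %(version)s
    Release: 1
    Source: %(name)s-%(version)s.tar.gz

    %%build
    ./setup.py build
    ./setup.py test

    %%install
    ./setup.py install
    """

    def bdist_rpm(meta):
        # Generate the spec file from the setup() meta-data...
        spec_name = "%s.spec" % meta["name"]
        f = open(spec_name, "w")
        f.write(SPEC_TEMPLATE % meta)
        f.close()
        # ...then let rpm drive the build, which re-enters setup.py.
        os.system("rpm -ba %s" % spec_name)

    bdist_rpm({"name": "foo", "version": "1.23",
               "description": "An example extension module"})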
This is the same for Debian, Slackware, the RPM-based Linux distributions, Solaris packages, and BeOS software packages. The last is only a vague guess, because I have only looked at the Be system very briefly.
The open question here is, How much duplication is there across various packaging systems? Definitely we should concentrate on the build/test/dist/install stuff first; giving the world a standard way to build module distributions from source would be a major first step, and we can worry about built distributions afterwards.
- What I would suggest setup.py should do
The options that I have no problem with are
    build_py  - copy/compile .py files (pure Python modules)
    build_ext - compile .c files, link to .so in blib
    build_doc - process documentation (targets: html, info, man, ...?)
    build     - build_py, build_ext, build_doc
    dist      - create source distribution
    test      - run test suite
    install   - install on local machine

What should make_blib do?

"make_blib" just creates a bunch of empty directories that mimic something under the Python lib directory, eg.

    ./blib
    ./blib/site-packages
    ./blib/site-packages/plat-sunos5
    ./blib/doc
    ./blib/doc/html

etc. (The plat directory under site-packages is, I think, something not in Python 1.5 -- but as Michel Sanner pointed out, it appears to be needed.) The reason for this: it provides a mockup installation tree in which to run test suites, it makes installation near-trivial, and it makes determining which files get installed where near-trivial. The reason for making it a separate command: build_py, build_ext, build_doc, and build all depend on it having already been done, so it's easier if they can just "call" this command themselves (which will of course silently do nothing if it doesn't need to do anything).
But what I do require is that I can tell build_ext which compiler switches to use, because I may need different switches on my system than the original developer used.
Actually, the preferred compiler/flags will come not from the module developer but from the Python installation which is being used. That's crucial; otherwise the shared library files might be incompatible with the Python binary. If you as packager or installer wish to tweak some of these ("I know this extension module is time-intensive, so I'll compile with -O2 instead of -O"), that's fine. Of course, that opens up some unpleasant possibilities: "My sysadmin compiled Python with cc, but I prefer gcc, so I'll use it for this extension." Danger Will Robinson! Danger! Not much we can do about that except warn in the documentation, I suppose.
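To illustrate where those preferred flags would come from: Python records the compiler and flags it was built with, and Distutils would presumably just read them back. (The sketch below uses the sysconfig module of present-day Pythons purely for illustration; the design under discussion would read the same values from Python's installed Makefile.)

    import sysconfig

    # The values Python itself was configured and built with:
    cc       = sysconfig.get_config_var("CC")        # e.g. "gcc"
    cflags   = sysconfig.get_config_var("CFLAGS")    # e.g. "-O2 ..."
    ldshared = sysconfig.get_config_var("LDSHARED")  # how to link a .so

    print("extensions will be compiled with: %s %s" % (cc, cflags))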
I would also like to provide the install option with an argument telling it where the files should be installed: I can then tell rpm, for example, that it should compile the extension package as if it would be installed in /usr/lib/python1.5, but at the install stage actually install it into /tmp/py-root/usr/lib/python1.5. So I can build and install the package without overwriting an existing installation of an older version, and I also have a clean way to determine which files actually got installed.
Yes, good idea. That should be an option to the "install" command; again, the default would come from the current Python installation, but could be overridden by the packager or installer.
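For example (option name illustrative; nothing is settled yet):

    ./setup.py build
    ./setup.py install --install-dir=/tmp/py-root/usr

would compile everything as if it were destined for /usr/lib/python1.5, but drop the files under /tmp/py-root/usr/lib/python1.5, leaving the live installation untouched and making the list of installed files trivial to collect.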
install should also be split up into install and install_doc, and install_doc should also be able to take an argument telling it where to install the files.
Another good idea. Actually, I think the split should be into "install python library stuff" and "install doc"; "install" would do both. I *don't* think that "install" should be split, like "build", into "install_py" and "install_ext". But I could be wrong... opinions?
I would remove the bdist option, because it would introduce a lot of work: you not only have to tackle various systems but also various packaging systems. I would add a files option instead, which returns a list of the files this package consists of. And consequently a doc_files option is also required, because I like to stick to the way rpm manages doc files: I simply tell it which files are doc files and it installs them the right way.
Punting on bdist: ok. Removing it? no way. It should definitely be handled, although it's not as high priority as being able to build from source. (Because obviously, if you can't build from source, you can't make a built distribution!) The option(s) to get out list(s) of files installed is good. Where does it belong, though? I would think something like "install --listonly" would do the trick.
Another thing that would be fine: if I could extract the package information with setup.py. Something like "setup.py description" would return the full description, and so on.
I like that -- probably best to just add one command, say "meta". Then you could say "./setup.py meta --description" or "./setup.py meta --name --version". Or whatever.
And I would also add a system option to the command-line options, because I would like to give the setup.py script an option from which it can determine which system it is running on. Why this is required will follow.
Already in the plan. Go back and check the archives for mid-January -- I posted a bunch of stuff about design proposals, with how-to-handle-command-line-options being one of my fuzzier areas. (Eg. see http://www.python.org/pipermail/distutils-sig/1999-January/000124.html and followups.)
- ARCH-dependent sections should be added
What is not clear in my eyes -- maybe I have missed something -- is how you deal with different architectures. What I would suggest here is that we use a dictionary instead of plain definitions of cc, ccshared, cflags, and ldflags. Such a dictionary might look like this
Generally, that's left up to Python itself. Distutils shouldn't have a catalogue of compilers and compiler flags, because those are chosen when Python is configured and built. That's the autoconf philosophy -- no feature catalogues, just make sure that what you try makes sense on the current platform, and let the builder (of Python in this case, not necessarily of a module distribution) override if he needs to. Module packagers and installers can tweak compiler stuff a little bit, but it's dangerous -- the more you tweak, the more likely you are to generate shared libraries that won't load with your Python binary.
- Subpackages are also required
Well, this is something that I like very much and something I have really gotten accustomed to. Say you build PIL and also a Tkinter version that supports PIL; then you would like to create both packages and also state that PIL-Tkinter requires PIL.
Something like that has been tried in the Perl world, except they were talking about "super-packages" and they called them "bundles". I think it was a hack because module dependencies were not completely handled for a long time (which, I gather, has now been fixed). I never liked the idea, and I hope it will now go away.

The plan for Distutils is to handle module dependencies from the start, because that lack caused many different Perl module developers to have to write Makefile.PL's that all check for their dependencies. That should be handled by MakeMaker (in the Perl world) and by Distutils (in the Python world).

Thanks for your comments!

        Greg
--
Greg Ward - software developer                    gward@cnri.reston.va.us
Corporation for National Research Initiatives    1895 Preston White Drive
voice: +1-703-620-8990 x287            Reston, Virginia, USA  20191-5434
fax: +1-703-620-0913
On Thu, Feb 04, 1999 at 10:37:29AM -0500, Greg Ward wrote:
Yes, there is a lot of overlap there. So why is the packager wasting his time building this RPM (or some other kind of built distribution)? Because *this* installer is an oddball running a six-year-old version of Freaknix on his ancient Frobnabulator-2000, and Distutils doesn't support Freaknix' weird packaging system. (Sorry.) So, like every good Unix weenie, he starts with the source code and installs that. [...]
Ok, I see the reason for this definition and I can absolutely agree with it. I was such an oddball myself, until I had to manage the software state for quite a lot of machines. ;-)))
Also, building from source builds character (as you will quickly find out if you ask "Where can I get pre-built binaries for Perl?" on comp.lang.perl.misc ;-). It's good for your soul, increases karma, reduces risk of cancer (but not ulcers!), etc.
;-)))
That's the basic idea, except it would probably be "bdist --rpm" -- 'bdist' being the command, '--rpm' being an option to it. If it turns out that all the "smart packagers" are sufficiently different and difficult to wrap, it might make sense to make separate commands for them, eg. "bdist_rpm", "bdist_debian", "bdist_wise", etc. Or something like that.
I don't think that splitting into separate commands is the right way, if you want to do this at all. I think setup.py should behave quite Pythonically in this situation; that means it should be called as you first defined it ("bdist --rpm"), because that describes, or models, the task in a better way. [...]
I haven't yet thought through how this should go, but your plan sounds pretty good. Awkward having setup.py call rpm, which then calls setup.py to build and install the modules, but consider that setup.py is really just a portal to various Distutils classes. In reality, we're using the Distutils "bdist" command to call rpm, which then calls the Distutils "build", "test", and "install" commands. It's not so clunky if you think about it that way.
Also, I don't see why this constitutes "building a meta packaging system" -- about the only RPM-ish terrain that Distutils would intrude upon is knowing which files to install. And it's got to know that anyways, else how could it install them? Heck, all we're doing here is writing a glorified Makefile in Python because Python has better control constructs and is more portable than make's language. Even the lowliest Makefile with an "install" target has to know what files to install.
Hm... I am not quite sure which position I should take here: the developer's or the packager's. But let's discuss this. ;-))

Let's assume we implement bdist --rpm and so on. What is the job of the developer, and what is the packager's? The developer has to provide the setup.py, and with it all the meta information he thinks useful for building and installing his extension, but also for building the binary distribution.

Now let's look at the process of packaging an extension module, and let's leave aside that the extension should be compiled on half a dozen platforms. What actually happens when the packager builds the distribution? Let's take PIL as an example. The packager (me) calls

    setup.py bdist --rpm

and then sees what goes wrong and tweaks such things as wrong include paths, wrong library names, evil compilation switches, and so on. Afterwards, building the distribution might look like this:

    setup.py build --include-path="/usr/include/mypath /usr/local/include/tk/" \
                   --libraries="tcl tclX mytk" \
                   --cflags="$RPM_OPT_FLAGS -mpentiumpro"
    setup.py install --install-dir="/tmp/pyroot/usr"
    setup.py bdist --install-dir="/tmp/pyroot/usr" --rpm

And this contradicts the actual rpm building process, because rpm wants to be able to build the package itself -- or I have to edit the setup.py file to make my changes. Normally the building process for an RPM looks like this:

    Step 1) create an rpm spec file
    Step 2) call rpm -ba <spec file>
        Step 2.1) unpack sources
        Step 2.2) compile sources
        Step 2.3) install binaries
        Step 2.4) package the files
    Step 3) install the package

If I have to edit setup.py, then we have the same problems we have with Makefiles.

Another problem that arises if setup.py or distutils can create an RPM itself, without me editing the spec file: what about dependencies? How can the developer know anything about the packages that are required on my system, or on my version of my system? How can he know, for example, that my packages require TkStep instead of Tk, or that my PIL package requires libjpeg-6b and not just the package jpeg-6b?

I don't think that building the actual package should be the job of distutils, because it introduces a lot of work that I as a developer don't want to take care of: I don't care how the packaging system of Linux distribution X works, or how Sun changes its packaging system in the next version. What I as a developer want is a way to make my extension compile and install the right way on all my target platforms, and to be able to add new platforms from user information.

I as a packager don't want to learn how to edit a new type of Makefile, which would in some way introduce a new meta level wrapping the actual packaging system I am very accustomed to. It is much easier for me to start out with some kind of dummy RPM spec file that already has all the basic setup.py calls included, and to tweak the RPM options in the RPM file, not in some new kind of file.

Hopefully I don't annoy you, or tell you something you have already looked at and decided is a minor problem. But let's look at late 1999: the distutils stuff has been released and all the world is using it -- and guess what would happen, in my eyes. Most people would use the features an installer has and should use, but stick to their traditional way of building the binary distribution.
I mean, all packaging systems without a config-file-driven build method can be driven from within distutils, but all the others will encounter problems, and their packagers will be forced to deal with stuff they don't want to deal with.
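To make the suggestion concrete, the kind of dummy spec file meant here -- with all the basic setup.py calls already included, so that only the RPM options need tweaking -- might look like this (package name, version, paths, and dependencies are invented for the example):

    Summary: Python Imaging Library
    Name: python-pil
    Version: 1.0
    Release: 1
    Requires: python, libjpeg-6b
    Source: Imaging-1.0.tar.gz

    %build
    ./setup.py build --cflags="$RPM_OPT_FLAGS"
    ./setup.py test

    %install
    ./setup.py install --install-dir="/tmp/pyroot/usr"

    %files
    /usr/lib/python1.5/site-packages/PIL/

The packager adjusts the Requires line, the compiler flags, and the %files list in the format he already knows, and never has to touch setup.py itself.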
This is the same for Debian, Slackware, the RPM-based Linux distributions, Solaris packages, and BeOS software packages. The last is only a vague guess, because I have only looked at the Be system very briefly.
The open question here is, How much duplication is there across various packaging systems? Definitely we should concentrate on the build/test/dist/install stuff first; giving the world a standard way to build module distributions from source would be a major first step, and we can worry about built distributions afterwards.
Yes, that is definitely the case. If I were able to easily build an extension without reading a README each and every time, and without having to deal with configuration parameters that differ each time, that would already help me a lot.
"make_blib" just creates a bunch of empty directories that mimic something under the Python lib directory, eg. [...]
I see; this is definitely a useful command for setup.py.
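For what it's worth, the command as described could be tiny; a sketch (directory names copied from Greg's example, everything else illustrative):

    import os

    BLIB_DIRS = [
        "blib",
        "blib/site-packages",
        "blib/site-packages/plat-sunos5",
        "blib/doc",
        "blib/doc/html",
    ]

    def make_blib():
        # Create the mock installation tree, parents first;
        # silently do nothing for directories that already exist.
        for dir in BLIB_DIRS:
            if not os.path.isdir(dir):
                os.mkdir(dir)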
install should also be split up into install and install_doc, and install_doc should also be able to take an argument telling it where to install the files.
Another good idea. Actually, I think the split should be into "install python library stuff" and "install doc"; "install" would do both. I *don't* think that "install" should be split, like "build", into "install_py" and "install_ext". But I could be wrong... opinions?
This is fine for me.
I would remove the bdist option, because it would introduce a lot of work: you not only have to tackle various systems but also various packaging systems. I would add a files option instead, which returns a list of the files this package consists of. And consequently a doc_files option is also required, because I like to stick to the way rpm manages doc files: I simply tell it which files are doc files and it installs them the right way.
Punting on bdist: ok. Removing it? no way. It should definitely be handled, although it's not as high priority as being able to build from source. (Because obviously, if you can't build from source, you can't make a built distribution!)
Ok, but I will keep my opinion on the all-in-one solution. ;-)) Though I will have to think about it.
The option(s) to get out list(s) of files installed is good. Where does it belong, though? I would think something like "install --listonly" would do the trick.
Yep this is fine.
Another thing that would be fine: if I could extract the package information with setup.py. Something like "setup.py description" would return the full description, and so on.
I like that -- probably best to just add one command, say "meta". Then you could say "./setup.py meta --description" or "./setup.py meta --name --version". Or whatever.
What I would like to see is

    setup meta --name --version --short-description --description
Already in the plan. Go back and check the archives for mid-January -- I posted a bunch of stuff about design proposals, with how-to-handle-command-line-options being one of my fuzzier areas. (Eg. see http://www.python.org/pipermail/distutils-sig/1999-January/000124.html and followups.)
Ok, I will look into this.
- ARCH-dependent sections should be added
What is not clear in my eyes -- maybe I have missed something -- is how you deal with different architectures. What I would suggest here is that we use a dictionary instead of plain definitions of cc, ccshared, cflags, and ldflags. Such a dictionary might look like this
Generally, that's left up to Python itself. Distutils shouldn't have a catalogue of compilers and compiler flags, because those are chosen when Python is configured and built. That's the autoconf philosophy -- no feature catalogues, just make sure that what you try makes sense on the current platform, and let the builder (of Python in this case, not necessarily of a module distribution) override if he needs to. Module packagers and installers can tweak compiler stuff a little bit, but it's dangerous -- the more you tweak, the more likely you are to generate shared libraries that won't load with your Python binary.
Ok, this is fine.
The plan for Distutils is to handle module dependencies from the start, because that lack caused many different Perl module developers to have to write Makefile.PL's that all check for their dependencies. That should be handled by MakeMaker (in the Perl world) and by Distutils (in the Python world).
This is good to hear. Is it also planned that I can check for a certain version of a library? Let's say I need libtk.so.8.1.0 and not libtk.so.8.0.0 -- or is this kept somewhere else? How do you plan to implement these tests?

Bye, Oliver

--
Oliver Andrich, RZ-Online, Schlossstrasse Str. 42, D-56068 Koblenz
Telefon: 0261-3921027 / Fax: 0261-3921033 / Web: http://rhein-zeitung.de
Private Homepage: http://andrich.net/
Quoth Oliver Andrich, on 04 February 1999:
Hm... I am not quite sure which position I should take here: the developer's or the packager's. But let's discuss this. ;-))
There are lots of people developing Python modules, but not too many packaging them up -- so your opinions as a packager are especially valuable! (So I can't understand why you-as-developer advocate pushing work onto the packager. >grin<)
What actually happens when the packager builds the distribution? Let's take PIL as an example. The packager (me) calls

    setup.py bdist --rpm

and then sees what goes wrong and tweaks such things as wrong include paths, wrong library names, evil compilation switches, and so on. Afterwards, building the distribution might look like this:

    setup.py build --include-path="/usr/include/mypath /usr/local/include/tk/" \
                   --libraries="tcl tclX mytk" \
                   --cflags="$RPM_OPT_FLAGS -mpentiumpro"
    setup.py install --install-dir="/tmp/pyroot/usr"
    setup.py bdist --install-dir="/tmp/pyroot/usr" --rpm

And this contradicts the actual rpm building process, because rpm wants to be able to build the package itself.
[...Greg gets a sinking feeling...] Urp, you're quite right. This business of building RPMs is a bit hairier than I thought.

Here's a *possible* solution to this problem: have setup.py (well, the Distutils modules really) read/write an options file that reflects the state of all options for all commands. (Credit for this idea goes to Martijn Faassen -- though I don't think Martijn said the Distutils should also *write* the options file to reflect user-supplied command-line options.) The options file would basically be a repository for command-line options: you could set it up before you run setup.py, and when you run setup.py with options, they get stuffed in the options file. Then, the "bdist --rpm" code uses the options file to create the spec file.

This raises the danger of "yet another Makefile-that's-not-a-Makefile to edit" though. Yuck. Maybe we could call it "Setup" so existing Python hackers think they know what it is. ;-)

However, you raise a lot of difficult issues regarding creating RPMs. I *still* think we should be able to handle this, but it's looking harder and harder -- and getting pushed farther and farther onto the back-burner. And this is only one of the "smart packagers" out there; we definitely need some expertise with Solaris 'pkg', the various Mac and Windows solutions, etc.

I'll leave it at that for now: I'm not going to respond in depth to all of your concerns, because for the most part I don't have answers. I will spend some head-scratching time on them, though.

Oh: implementing a "bdist" command to create "dumb" built distributions (eg. a tar.gz or zip file) should still be pretty easy, though -- just tar/zip up the blib directory (minus documentation: hard to say where that should be installed, since there's no standard for module documentation). So *that* doesn't have to be punted on.
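Coming back to the options file for a moment: it could start out utterly simple, say one command/option/value triple per line. A sketch of the read/write half (file name and format are pure invention):

    import os

    OPTIONS_FILE = "setup.options"

    def read_options():
        # Return {command: {option: value}} from the saved options.
        options = {}
        if os.path.exists(OPTIONS_FILE):
            for line in open(OPTIONS_FILE).readlines():
                command, option, value = line.strip().split("\t")
                options.setdefault(command, {})[option] = value
        return options

    def write_options(options):
        # Persist everything so a later "bdist --rpm" run can
        # reproduce the exact build the packager tweaked by hand.
        f = open(OPTIONS_FILE, "w")
        for command in options.keys():
            for option, value in options[command].items():
                f.write("%s\t%s\t%s\n" % (command, option, value))
        f.close()

    # e.g. recording "setup.py build --cflags=-O2":
    opts = read_options()
    opts.setdefault("build", {})["cflags"] = "-O2"
    write_options(opts)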
What I would like to see is

    setup meta --name --version --short-description --description
Sounds about right to me -- we haven't really discussed what meta-data is necessary, but this is a bare minimum. It should be at least a superset of what RPM knows, though.
This is good to hear. Is it also planned that I can check for a certain version of a library? Let's say I need libtk.so.8.1.0 and not libtk.so.8.0.0 -- or is this kept somewhere else? How do you plan to implement these tests?
No, that's not in the plan and it's a known weakness. Checking dependencies on Python itself and other Python modules should be doable, so that's what I think should be done. Open the door beyond that and you fall into a very deep rat-hole indeed -- a rat-hole that RPM takes care of quite nicely, and you argued very cogently that we should *not* duplicate what RPM (and others) already do!

        Greg
--
Greg Ward - software developer                    gward@cnri.reston.va.us
Corporation for National Research Initiatives    1895 Preston White Drive
voice: +1-703-620-8990 x287            Reston, Virginia, USA  20191-5434
fax: +1-703-620-0913
On Thu, Feb 04, 1999 at 04:44:21PM -0500, Greg Ward wrote:
valuable! (So I can't understand why you-as-developer advocate pushing work onto the packager. >grin<)
Well, if the things I wrote up for the packager were the packager's only job, then I would have saved a lot of work compared to the current situation. ;-)) And on the other hand, I know how someone feels when he has to figure out how to compile an extension on a system he has no access to. ;-)
[...Greg gets a sinking feeling...]
Hopefully this is not so bad that you can't continue your good work? But sometimes you have to face reality as it is. ;-))))
This raises the danger of "yet another Makefile-that's-not-a-Makefile to edit" though. Yuck. Maybe we could call it "Setup" so existing Python hackers think they know what it is. ;-)
That is a little bit what I saw coming up. ;-)))
we definitely need some expertise with Solaris 'pkg', the various Mac and Windows solutions, etc.
This would be helpful. For all packaging systems I know of, this is the way packages are built.
Oh: implementing a "bdist" command to create "dumb" built distributions (eg. a tar.gz or zip file) should still be pretty easy, though -- just tar/zip up the blib directory (minus documentation: hard to say where that should be installed, since there's no standard for module documentation). So *that* doesn't have to be punted on.
I would also leave this inside the distutils package, because it is something we can easily deal with.
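For illustration, in present-day Python the whole "dumb bdist" step shrinks to something like this (archive name invented):

    import shutil

    # Pack up everything under ./blib as foo-1.23.linux.tar.gz
    shutil.make_archive("foo-1.23.linux", "gztar", root_dir="blib")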
Sounds about right to me -- we haven't really discussed what meta-data is necessary, but this is a bare minimum. It should be at least a superset of what RPM knows, though.
I would suggest having at least this information in the meta-data:

- name of the developer
- name of the package
- package version
- a one-sentence / one-line summary
- a description field
- a copyright notice, i.e. something like a short identifier for the licence used
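In a setup.py, this list might map onto keyword arguments roughly as follows (the field names are only a guess at what Distutils will choose):

    from distutils.core import setup

    setup(name="foo",                    # name of the package
          version="1.23",               # package version
          author="Jane Developer",      # name of the developer
          description="One-line summary of the package",
          long_description="The full, multi-paragraph description ...",
          license="GPL")                # short licence identifier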
This is good to hear. Is it also planned that I can check for a certain version of a library? Let's say I need libtk.so.8.1.0 and not libtk.so.8.0.0 -- or is this kept somewhere else? How do you plan to implement these tests?
No, that's not in the plan and it's a known weakness. Checking dependencies on Python itself and other Python modules should be doable, so that's what I think should be done. Open the door beyond that and you fall into a very deep rat-hole indeed -- a rat-hole that RPM takes care of quite nicely, and you argued very cogently that we should *not* duplicate what RPM (and others) already do!
Hm... this is something I would have expected distutils to include. I have seen distutils a little bit as an autoconf for Python, in Python -- not as powerful, but with some basic features of autoconf. This is information the developer can easily provide; whether it is possible to implement is the other question.

I hope I haven't been too pessimistic.

Bye, Oliver

--
Oliver Andrich, RZ-Online, Schlossstrasse Str. 42, D-56068 Koblenz
Telefon: 0261-3921027 / Fax: 0261-3921033 / Web: http://rhein-zeitung.de
Private Homepage: http://andrich.net/
Quoth Oliver Andrich, on 04 February 1999:
[I dismiss dealing with dependencies on non-Python stuff]
Hm... this is something I would have expected distutils to include. I have seen distutils a little bit as an autoconf for Python, in Python -- not as powerful, but with some basic features of autoconf. This is information the developer can easily provide; whether it is possible to implement is the other question.
There are some possible partial solutions: list C libraries that this module depends on, and if a test program compiles and links, it's there. Ditto for header files. But testing versions? Things more complicated than normal C libraries? You'd need a way for developers to supply chunks of test C code, just the way autoconf does it... ugh...

        Greg
--
Greg Ward - software developer                    gward@cnri.reston.va.us
Corporation for National Research Initiatives    1895 Preston White Drive
voice: +1-703-620-8990 x287            Reston, Virginia, USA  20191-5434
fax: +1-703-620-0913
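A bare-bones version of the library check Greg describes might look like the sketch below (compiler invocation deliberately simplified, names invented); anything involving versions would indeed need developer-supplied test code, just as autoconf does it.

    import os, tempfile

    def have_library(library, header):
        # Write a trivial program using the header, then see
        # whether it compiles and links against the library.
        src = tempfile.mktemp() + ".c"
        f = open(src, "w")
        f.write("#include <%s>\nint main(void) { return 0; }\n" % header)
        f.close()
        status = os.system("cc %s -l%s -o /dev/null 2>/dev/null"
                           % (src, library))
        os.unlink(src)
        return status == 0

    print(have_library("jpeg", "jpeglib.h"))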
On Mon, Feb 08, 1999 at 06:09:51PM -0500, Greg Ward wrote:
There are some possible partial solutions: list C libraries that this module depends on, and if a test program compiles and links, it's there. Ditto for header files. But testing versions? Things more complicated than normal C libraries? You'd need a way for developers to supply chunks of test C code, just the way autoconf does it... ugh...
Well, I think we can go two ways here. Either we try to cover each and every detail with the distutils, which seems to be way too much, or we provide what I believe is the core of the distutils, which is clearly described in the proposed interface. (With slight modifications of course. ;-)

I know what I wrote before, but after having had some time to think about it, I have decided that we should add a piece of meta-level information that looks something like

    setup.py meta --requirements

and which prints a free-form text written by the developer in which he states what he needs. As an installer and packager, you should be able to tell whether your system satisfies the requirements.

Bye, Oliver Andrich

--
Oliver Andrich, RZ-Online, Schlossstrasse Str. 42, D-56068 Koblenz
Telefon: 0261-3921027 / Fax: 0261-3921033 / Web: http://rhein-zeitung.de
Private Homepage: http://andrich.net/
Greg Ward wrote:
Quoth Oliver Andrich, on 04 February 1999:
This is good to hear. Is it also planned that I can check for a certain version of a library? Let's say I need libtk.so.8.1.0 and not libtk.so.8.0.0 -- or is this kept somewhere else? How do you plan to implement these tests?
No, that's not in the plan and it's a known weakness. Checking dependencies on Python itself and other Python modules should be doable, so that's what I think should be done. Open the door beyond that and you fall into a very deep rat-hole indeed -- a rat-hole that RPM takes care of quite nicely, and you argued very cogently that we should *not* duplicate what RPM (and others) already do!
Actually there was some discussion of an "external dependency" system. The idea is that we'd like Python extensions to check for some common libraries that they rely on. For these common external libraries (or programs) we could provide standard modules that simply check if the library is there (and return true or false or some failure message).

Initially the distutils package could supply external dependency modules for some libraries (such as for GNU readline). Eventually the developers or packagers could start to supply these things (we could initiate a central archive for them, or bundle them with the distribution, so other developers or packagers don't have to reinvent the wheel). After that, the developers of these external libraries themselves might start providing them. :)

An external dependency checking system could be as simple or complicated as one likes. A simple system would simply check if libfoo.1.5 is there. A more complicated system might look for libfoo.1.5 and upwards. The simplest system ever is when the check automatically gets installed when libfoo.1.5 is installed -- it doesn't need much checking code then. :)

What probably is out of scope is autoconf-style fallback behavior: if we can't find libfoo, we are still able to use libbar to provide the same functionality. Unless this somehow follows easily from the design, we shouldn't aim for it.

Anyway, this was just discussion. We haven't fully worked out the implications of an external dependency system yet. How useful would it be? How platform-independent would or should it be? How do the external dependency modules get distributed -- with distutils? with distutils packages? with the external libraries? Where are these external dependency modules stored?

The advantage of Python is that we have a full-fledged programming language at our hands to do configure-style things. This can make things more messy, but it can also definitely make some things trivial that are hard to do with make-like systems (especially if there are some suitable Python modules to help). We shouldn't be *too* scared of rebuilding configure functionality, and we shouldn't be too worried about developers/packagers having to learn yet another make language; it'd just be Python, and if there are enough batteries included it shouldn't be hard.

Regards, Martijn
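For illustration, an external dependency module of the kind described could be as small as this (shown with the ctypes.util module of present-day Pythons; a 1999 implementation would scan the library path by hand):

    from ctypes.util import find_library

    def check():
        # True if a readline shared library can be located.
        return find_library("readline") is not None

    if __name__ == "__main__":
        print(check() and "readline found" or "readline missing")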