On Thu, Feb 04, 1999 at 10:37:29AM -0500, Greg Ward wrote:
Yes, there is a lot of overlap there. So why is the packager wasting his time building this RPM (or some other kind of built distribution)? Because *this* installer is an oddball running a six-year-old version of Freaknix on his ancient Frobnabulator-2000, and Distutils doesn't support Freaknix' weird packaging system. (Sorry.) So, like every good Unix weenie, he starts with the source code and installs that. [...]
OK, I see the reason for this definition and I can absolutely agree with it. I was such an oddball myself until I had to manage software installations for quite a lot of machines. ;-)))
Also, building from source builds character (as you will quickly find out if you ask "Where can I get pre-built binaries for Perl?" on comp.lang.perl.misc ;-). It's good for your soul, increases karma, reduces risk of cancer (but not ulcers!), etc.
;-)))
That's the basic idea, except it would probably be "bdist --rpm" -- 'bdist' being the command, '--rpm' being an option to it. If it turns out that all the "smart packagers" are sufficiently different and difficult to wrap, it might make sense to make separate commands for them, eg. "bdist_rpm", "bdist_debian", "bdist_wise", etc. Or something like that.
I don't think that is the right way to go if you want to do this. I think that setup.py should behave quite Pythonically in this situation; that means it should be called the way you defined it, because that describes or models the situation better. [...]
I haven't yet thought through how this should go, but your plan sounds pretty good. Awkward having setup.py call rpm, which then calls setup.py to build and install the modules, but consider that setup.py is really just a portal to various Distutils classes. In reality, we're using the Distutils "bdist" command to call rpm, which then calls the Distutils "build", "test", and "install" commands. It's not so clunky if you think about it that way.
Also, I don't see why this constitutes "building a meta packaging system" -- about the only RPM-ish terrain that Distutils would intrude upon is knowing which files to install. And it's got to know that anyways, else how could it install them? Heck, all we're doing here is writing a glorified Makefile in Python because Python has better control constructs and is more portable than make's language. Even the lowliest Makefile with an "install" target has to know what files to install.
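Greg's point that even the lowliest Makefile has to carry its own file list can be made concrete with a toy sketch in Python. This is not Distutils code, just an illustration; the file names and function are hypothetical:

```python
import os
import shutil

# The "manifest" that any install target, Makefile or Python, has to know.
FILES = ["pil.py", "_imaging.so"]

def install(files, src_dir, dest_dir):
    """Copy each listed file from src_dir into dest_dir."""
    os.makedirs(dest_dir, exist_ok=True)
    installed = []
    for name in files:
        shutil.copy(os.path.join(src_dir, name), os.path.join(dest_dir, name))
        installed.append(os.path.join(dest_dir, name))
    return installed
```

The point is simply that the file list lives in the build tool either way; Python just gives you better control constructs than make for manipulating it.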
Hm... I am not quite sure which position I should take here -- the developer's or the packager's. But let's discuss it. ;-))

Let's assume we implement bdist --rpm and so on. What is the job of the developer, and what is the job of the packager? The developer provides the setup.py and, with it, all the meta information he thinks is useful for building and installing his extension, but also for building a binary distribution.

Now let's look at the process of packaging an extension module, and let's leave aside that the extension should be compiled on half a dozen platforms. What actually happens when the packager builds the distribution? Take PIL as an example. The packager (me) calls setup.py bdist --rpm, sees what goes wrong, and tweaks such things as wrong include paths, wrong library names, evil compilation switches and so on. Afterwards, building the distribution might look like this:

    setup.py build --include-path="/usr/include/mypath /usr/local/include/tk/" \
                   --libraries="tcl tclX mytk" \
                   --cflags="$RPM_OPT_FLAGS -mpentiumpro"
    setup.py install --install-dir="/tmp/pyroot/usr"
    setup.py bdist --install-dir="/tmp/pyroot/usr" --rpm

And this contradicts the actual RPM building process, because rpm wants to be able to build the package itself. Otherwise I have to edit the setup.py file to make my changes, since normally the build process for an RPM looks like this:

    Step 1)   create an RPM spec file
    Step 2)   call rpm -ba <spec file>
    Step 2.1)   unpack sources
    Step 2.2)   compile sources
    Step 2.3)   install binaries
    Step 2.4)   package the files
    Step 3)   install the package

If I have to edit setup.py, then we have the same problems we have with Makefiles.

Another problem that arises if setup.py or Distutils can create an RPM itself, without me editing the spec file: what about dependencies? How can the developer know anything about the packages that are required on my system, or on my version of my system?
How can he know, for example, that my packages require Tkstep instead of Tk, or that my PIL package requires libjpeg-6b and not just the package jpeg-6b?

I don't think that building the actual package should be the job of Distutils, because it introduces a lot of work that I as a developer don't want to take on: I don't care how the packaging system of Linux distribution X works, or how Sun changes its packaging system in the next release. What I as a developer want is a way to make my extension compile and install correctly on all my target platforms, and to be able to add new platforms from user feedback.

As a packager, I don't want to learn how to edit yet another type of Makefile, which would in some way introduce a new meta level wrapping the actual packaging system I am already accustomed to. It is much easier for me to start from some kind of dummy RPM spec file that already has all the basic setup.py calls included, and to tweak the RPM options in the spec file rather than in some new kind of file.

Hopefully I am not annoying you, or telling you something you have already looked at and decided is a minor problem. But look ahead to late 1999: the Distutils stuff has been released and all the world is using it -- and guess what would happen, in my eyes. Most people would use the features an installer should use, but stick to their traditional way of building the binary distribution. I mean, every packaging system with a config-file-driven method can be driven from Distutils, but all the others will run into problems, and their packagers will be forced to deal with things they don't want to deal with.
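The "dummy spec file with the setup.py calls already included" idea could be as simple as a template that a packager then edits freely. A rough sketch of such a generator -- all names, paths and spec contents are illustrative, not a proposal for real Distutils output:

```python
# A minimal RPM spec skeleton; %% escapes a literal % for the
# string-formatting step below.
SPEC_TEMPLATE = """\
Name: python-%(name)s
Version: %(version)s
Release: 1
Summary: %(summary)s

%%build
python setup.py build

%%install
python setup.py install --install-dir="$RPM_BUILD_ROOT/usr"

%%files
%%doc README
/usr/lib/python1.5/site-packages/%(name)s
"""

def make_spec(name, version, summary):
    """Fill in the template; the packager tweaks the result by hand."""
    return SPEC_TEMPLATE % {"name": name,
                            "version": version,
                            "summary": summary}
```

The packager keeps full control of the RPM-specific parts (Requires lines, %files details) while the setup.py calls do the portable build and install work.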
This is the same for Debian Linux, Slackware Linux, RPM-based Linux distributions, Solaris packages and BeOS software packages. The last is only a vague guess, because I have only looked into the Be system very briefly.
The open question here is: how much duplication is there across the various packaging systems? Definitely we should concentrate on the build/test/dist/install stuff first; giving the world a standard way to build module distributions from source would be a major first step, and we can worry about built distributions afterwards.
Yes, that is definitely the case. If I were able to easily build an extension without reading a README each and every time, and without having to deal with configuration parameters that differ each time, that alone would already help me a lot.
"make_blib" just creates a bunch of empty directories that mimic something under the Python lib directory, eg. [...]
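A "make_blib" step could be sketched roughly like this; the particular directory layout below is hypothetical, just to show the shape of the command:

```python
import os

# Hypothetical staging layout mirroring a Python lib directory.
BLIB_LAYOUT = [
    "blib/lib",        # pure Python modules
    "blib/lib/plat",   # compiled extension modules
    "blib/doc",        # documentation
]

def make_blib(root):
    """Create the empty staging directories under 'root'."""
    created = []
    for sub in BLIB_LAYOUT:
        path = os.path.join(root, sub)
        os.makedirs(path, exist_ok=True)
        created.append(path)
    return created
```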
I see, this is definitely a useful command for setup.py.
install should also be split up into install and install_doc, and install_doc should also be able to take an argument telling it where to install the files.
Another good idea. Actually, I think the split should be into "install python library stuff" and "install doc"; "install" would do both. I *don't* think that "install" should be split, like "build", into "install_py" and "install_ext". But I could be wrong... opinions?
This is fine for me.
I would remove the bdist option, because it would introduce a lot of work: you not only have to handle various systems but also various packaging systems. Instead, I would add an option "files" which returns a list of the files this package consists of. Consequently an option "doc_files" is also required, because I like to stick to the way RPM manages doc files: I simply tell it which files are doc files, and it installs them the right way.
Punting on bdist: ok. Removing it? no way. It should definitely be handled, although it's not as high priority as being able to build from source. (Because obviously, if you can't build from source, you can't make a built distribution!)
OK, but I stand by my opinion on the all-in-one solution. ;-)) Though I have to think about it some more.
The option(s) to get out list(s) of files installed is good. Where does it belong, though? I would think something like "install --listonly" would do the trick.
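An "install --listonly" could just walk the install manifest and report the targets instead of copying them. A minimal sketch, with hypothetical file names:

```python
import os
import shutil

def install(files, prefix, list_only=False):
    """Install 'files' under 'prefix'; with list_only, only report targets."""
    targets = [os.path.join(prefix, os.path.basename(f)) for f in files]
    if not list_only:
        os.makedirs(prefix, exist_ok=True)
        for src, dst in zip(files, targets):
            shutil.copy(src, dst)
    return targets          # with list_only, exactly what --listonly prints
```

The key design point is that listing and installing share one code path, so the reported list can never drift out of sync with what actually gets installed.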
Yep this is fine.
Another thing that would be nice is if I could extract the package information with setup.py. Something like "setup.py description" would return the full description, and so on.
I like that -- probably best to just add one command, say "meta". Then you could say "./setup.py meta --description" or "./setup.py meta --name --version". Or whatever.
What I would like to see is: setup.py meta --name --version --short-description --description
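Such a "meta" command could simply read back the keyword arguments a setup.py already passes to setup(). A sketch with a stand-in Distribution class (the class and method names here are hypothetical, not the real Distutils API):

```python
class Distribution:
    """Holds the metadata a setup.py passes to setup()."""

    def __init__(self, **meta):
        self.meta = meta

    def show_meta(self, *fields):
        """Return the requested fields, e.g. for 'meta --name --version'."""
        return [str(self.meta.get(f, "")) for f in fields]

# What a setup.py call would have registered:
dist = Distribution(name="PIL", version="1.0b1",
                    description="Python Imaging Library")
```

A packager's script could then shell out to "setup.py meta --name --version" and capture one value per line, with no parsing of the setup.py source required.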
Already in the plan. Go back and check the archives for mid-January -- I posted a bunch of stuff about design proposals, with how-to-handle- command-line-options being one of my fuzzier areas. (Eg. see http://www.python.org/pipermail/distutils-sig/1999-January/000124.html and followups.)
OK, I will look into this.
- ARCH dependent sections should be added
What is not clear to me -- maybe I have missed something -- is how you deal with different architectures. What I would suggest here is that we use a dictionary instead of plain definitions of cc, ccshared, cflags and ldflags. Such a dictionary might look like this:
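(The original dictionary example appears to have been lost in the quoting; the following is an illustrative reconstruction of the idea, with all compiler names and flag values hypothetical:)

```python
# Per-architecture build settings instead of flat cc/cflags definitions.
BUILD_FLAGS = {
    "linux-i386": {
        "cc": "gcc",
        "ccshared": "-fPIC",
        "cflags": "-O2",
        "ldflags": "-shared",
    },
    "solaris-sparc": {
        "cc": "cc",
        "ccshared": "-KPIC",
        "cflags": "-O",
        "ldflags": "-G",
    },
}

def flags_for(arch):
    """Look up the build settings for one architecture."""
    return BUILD_FLAGS[arch]
```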
Generally, that's left up to Python itself. Distutils shouldn't have a catalogue of compilers and compiler flags, because those are chosen when Python is configured and built. That's the autoconf philosophy -- no feature catalogues, just make sure that what you try makes sense on the current platform, and let the builder (of Python in this case, not necessarily of a module distribution) override if he needs to. Module packagers and installers can tweak compiler stuff a little bit, but it's dangerous -- the more you tweak, the more likely you are to generate shared libraries that won't load with your Python binary.
Ok, this is fine.
The plan for Distutils is to handle module dependencies from the start, because that lack caused many different Perl module developers to have to write Makefile.PL's that all check for their dependencies. That should be handled by MakeMaker (in the Perl world) and by Distutils (in the Python world).
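A module-dependency check of the kind described here could be sketched roughly as follows. This is a hypothetical API, not the planned Distutils one, and for simplicity it assumes purely numeric dotted version strings like "1.2.0":

```python
def check_requires(requires):
    """Check a list of (module_name, minimum_version) pairs.

    Returns a list of (name, reason) pairs for unmet requirements.
    """
    missing = []
    for name, min_version in requires:
        try:
            mod = __import__(name)
        except ImportError:
            missing.append((name, "not installed"))
            continue
        have = getattr(mod, "__version__", None)
        if have is None:
            continue  # module does not report a version; assume it is OK
        if tuple(int(p) for p in have.split(".")) < \
           tuple(int(p) for p in min_version.split(".")):
            missing.append((name, "have %s, need %s" % (have, min_version)))
    return missing
```

A setup.py could run such a check before building and print the unmet requirements, instead of every author hand-rolling the equivalent of a Makefile.PL dependency probe.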
This is good to hear. Is it also planned that I can check for a certain version of a library? Let's say I need libtk.so.8.1.0 and not libtk.so.8.0.0 -- or is that handled somewhere else? How do you plan to implement these tests?

Bye, Oliver
--
Oliver Andrich, RZ-Online, Schlossstrasse Str. 42, D-56068 Koblenz
Telefon: 0261-3921027 / Fax: 0261-3921033 / Web: http://rhein-zeitung.de
Private Homepage: http://andrich.net/