[Distutils] The $0.02 of a packager ;-)

Oliver Andrich oli@andrich.net
Tue, 2 Feb 1999 22:53:46 +0100


Hi,

After being invited to this list by Greg Ward, I tried to read all of the
available documents concerning the topic of the list, and I also tried to get
up to date with the mailing list by reading the archives. Now I would like to
give my $0.02 as someone who tries to keep a fairly big distribution up to
date, one which also includes a lot of third-party modules. This mail is
rather long, because I would like to comment on everything in one mail; I
think quite a lot of it relates to each other.

And please keep in mind that I am speaking as an end user in one way or
another: I will have to use this both as a developer and as a packager of
third-party developers' modules. So I won't comment on any implementation
issue unless it is absolutely critical.

And please also keep in mind that I may be talking about things you have
already discussed; I am quite new to this list and have only read the
archives of this month and the last. ;-)

- Tasks and division of labour

  I think that the packager and installer roles are a little bit mixed up.
  In my eyes, the packager's job is to build and test the package on a
  specific platform and to build the binary distribution. This is also what
  you wrote on the web page.

  The installer's role, however, should only be to install the prebuilt
  package, because his normal job is to provide up-to-date software to the
  users of the systems he manages. And he has enough to do with that.

- The proposed Interface 

  Basically, I like the idea that I can write in my RPM spec file

  setup.py build
  setup.py test
  setup.py install

  and afterwards the package is installed somewhere on my machine, and I am
  absolutely sure that it works as intended. I think this is the way it works
  for most Perl modules.
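
  To make this concrete, here is a minimal sketch of what the relevant part
  of such a spec file might look like (the package name and whether the test
  step belongs in %build are just assumptions for illustration):

  # fragment of a hypothetical python-foo.spec
  %build
  setup.py build
  setup.py test

  %install
  setup.py install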

  But I have problems with the bdist option of setup.py, because I think it
  is hard to implement. If I got this right, as an RPM and Debian package
  maintainer I should be able to say

  setup.py bdist rpm
  setup.py bdist debian

  and afterwards I have a Debian package and an RPM package of the Python
  package. Nice in theory, but this would require that setup.py or the
  distutils package knows how to create these packages; that means we would
  have to implement a meta packaging system on top of existing packaging
  systems, which are quite powerful themselves. So what would it look like
  when I call the commands above?

  Would the distutils stuff create a spec file (the input file for building
  an RPM) and then call rpm -ba <specfile>? And inside the rpm build process,
  would setup.py be called again to compile and install the package's
  contents? Finally, rpm would create the two normal output files: the actual
  binary package, and the source RPM from which the binary package can be
  recompiled on your machine.

  The same applies to Debian Linux, Slackware Linux, the various RPM-based
  Linux distributions, Solaris packages and BeOS software packages. The last
  one is only a vague guess, because I have only looked at the Be system very
  briefly.

- What I would suggest setup.py should do

  The options that I have no problem with are 

  build_py    - copy/compile .py files (pure Python modules)
  build_ext   - compile .c files, link to .so in blib
  build_doc   - process documentation (targets: html, info, man, ...?)
  build       - build_py, build_ext, build_doc
  dist        - create source distribution
  test        - run test suite
  install     - install on local machine
  
  What should make_blib do?

  But what I do require is that I can tell build_ext which compiler switches
  to use, because on my system I may need different switches than the ones
  the original developer used.
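
  For example, something along these lines (the option names are purely
  hypothetical, just to illustrate the idea):

  setup.py build_ext --cc=egcs --cflags="-O2 -fPIC"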

  I would also like to give the install option an argument that tells it
  where the files should be installed. With rpm, for example, I can compile
  an extension package as if it were going to be installed in
  /usr/lib/python1.5, but then, in the install stage, actually install it
  under /tmp/py-root/usr/lib/python1.5. That way I can build and install the
  package without overwriting an existing installation of an older version,
  and I also have a clean way to determine which files actually got
  installed.

  install should also be split up into install and install_doc, and
  install_doc should likewise take an argument telling it where to install
  the files.
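
  A sketch of what such an invocation might look like during an rpm build
  (the option name --root is purely hypothetical; any spelling would do):

  setup.py build
  setup.py install --root=/tmp/py-root
  setup.py install_doc --root=/tmp/py-root

  so that a module destined for /usr/lib/python1.5 ends up under
  /tmp/py-root/usr/lib/python1.5 while the package is being built.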

  I would remove the bdist option, because it would introduce a lot of work:
  you not only have to deal with various systems but also with various
  packaging systems. Instead I would add an option, files, which returns the
  list of files this package consists of. Consequently an option doc_files
  is also required, because I would like to stick to the way rpm manages doc
  files: I simply tell it which files are doc files and it installs them the
  right way.
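
  The output of such an option (the exact format is of course still open)
  could then be captured during the rpm build and fed straight into the
  %files section of the spec file:

  setup.py files > INSTALLED_FILES

  %files -f INSTALLED_FILES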

  Another thing that would be nice is if I could extract the package
  information with setup.py. Something like "setup.py description" returning
  the full description, and so on.
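
  For instance (the sub-command names are again only a suggestion, and the
  values are made up):

  setup.py name         ->  PIL
  setup.py version      ->  1.0
  setup.py description  ->  the full description text

  so that a spec file or a Debian control file could be filled in from the
  same meta information.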

  And I would also add a system option to the command line options, because
  I would like to give the setup.py script an option from which it can
  determine which system it is running on. Why this is required follows
  below.

- ARCH dependent sections should be added

  What is not clear to me (maybe I have missed something) is how you deal
  with different architectures. What I would suggest here is that we use a
  dictionary instead of plain definitions of cc, ccshared, cflags and
  ldflags. Such a dictionary might look like this:

  compilation_flags = { "Linux" : { "cc": "gcc", "cflags": "-O3", ...},
                        "Linux2.2": { "cc": "egcs", ....},
                        "Solaris": { "cc": "cc", ....}
  }

  And I would then call setup.py like this:

  setup.py -system Linux build

  or whatever convention you want to use for command line arguments.
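
  A minimal sketch of how setup.py might pick the right entry from such a
  dictionary (the "default" fallback entry and the option parsing are just
  assumptions for illustration):

  import sys

  def pick_flags(argv, flags_table):
      # honour a "-system <name>" option if given, otherwise fall
      # back to a "default" entry in the dictionary
      system = "default"
      if "-system" in argv:
          system = argv[argv.index("-system") + 1]
      return flags_table.get(system, flags_table["default"])

  flags = pick_flags(sys.argv[1:], compilation_flags)
  # flags["cc"], flags["cflags"], ... can then be handed to build_ext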

- Subpackages are also required

  Well, this is something that I like very much and that I have really
  gotten accustomed to. Say you build PIL and also a Tkinter version that
  supports PIL; then you would like to create both packages and also state
  that PIL-Tkinter requires PIL.
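
  A purely hypothetical sketch of how such meta information might be
  expressed in a setup script (the keyword names subpackages and requires
  are invented for illustration):

  setup(name = "PIL",
        version = "1.0",
        # hypothetical keywords, not part of any proposed distutils API:
        subpackages = ["PIL", "PIL-Tkinter"],
        requires = {"PIL-Tkinter": ["PIL"]})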
  
Conclusion (or whatever you want to call it)

  As a packager, I don't require the distutils stuff to be some kind of meta
  packaging system that generates, from some kind of meta information, the
  actual package creation file from which it is then called again. And I
  don't believe we have to develop a completely new packaging system,
  because such systems already exist for quite a lot of platforms. I also
  think that if we introduced such a system, the acceptance wouldn't be very
  high. People want to maintain their software base with their native tools:
  a Red Hat Linux user would like to use rpm, a Solaris user would like to
  use pkg, and a Windows user would like to use InstallShield (or whatever
  the standard there is).

  The goal of distutils should be to develop a package which can be
  configured to compile and install an extension package. The resulting
  software should be usable by the packager to extract all the information
  required to create his native package, and the installer should ideally
  use the prebuilt packages, or else be able to install the package by
  calling setup.py install.


I hope I have described as well as possible what I require as a packager,
and which parts I think are not the business of distutils but of the native
packaging system. Any comments are welcome and I am willing to discuss this,
as I am absolutely aware that we need a standard way of installing Python
extensions.

Best regards, 

    Oliver

-- 
Oliver Andrich, RZ-Online, Schlossstrasse Str. 42, D-56068 Koblenz
Telefon: 0261-3921027 / Fax: 0261-3921033 / Web: http://rhein-zeitung.de 
Private Homepage: http://andrich.net/