We have a zc.buildout-based deployment system, and we're looking at
how we can make the process slightly easier for moving between
development, testing and production environments.
I was wondering if others have a pattern for setting up a buildout in
this way? I am imagining that this is better done by running the
different steps, but putting the outputs in different locations (e.g.
for testing, put apache configurations in some directory in the test
tree), rather than trying to selectively run steps (e.g. for testing, I
do not want apache configurations set up). Is there a useful
buildout.cfg pattern that covers this?
Also, one other use-case that we've got is to be able to group parts
together into sets. So, for example, I'd like to be able to write:
parts = awstats
as shorthand for
parts = awstats-download apacheconf-install crontab-install
which would tell buildout to run the three parts listed, rather than
remembering all three specific parts in the parts list. The advantage
would be ensuring the bits are always run together.
Has anyone done anything like this at all?
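For what it's worth, the pattern I've seen most often is a base configuration that each environment-specific file extends, overriding only what differs. A sketch (part names, recipes, and option names here are all hypothetical):

```ini
# base.cfg -- parts common to every environment
[buildout]
parts = app

[app]
recipe = zc.recipe.egg
eggs = myproject

# production.cfg -- run with: bin/buildout -c production.cfg
[buildout]
extends = base.cfg
parts += apacheconf-install crontab-install

# testing.cfg -- same parts, but outputs land inside the test tree
[buildout]
extends = base.cfg
parts += apacheconf-install

[apacheconf-install]
target = ${buildout:directory}/parts/test-apache
```

`parts +=` is buildout's value-append syntax in an extending config, which also gives you a crude version of the "sets" idea: each environment file names the group of parts it wants run together.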
I've always built my python packages as a folder, with an __init__.py,
any other python bits and then the miscellany (readme.txt, etc) in the
same folder.
For distributions, I've historically tarred up the folder and put it up
for download with installation instructions to just untar it somewhere on
the python path.
However, I'd now like to distribute one of my products as an egg, and so
thought I'd use setuptools to build the source distros as well.
The package is mailinglogger and you can see the svn trunk here:
The setup.py there is wrong in that the egg it builds contains only the
'tests' subpackage and none of the other modules, and no mailinglogger
package to boot.
Given the component.xml in there, I'm pretty sure I need to add a
zip_safe=True, but what else do I need to do to get setup.py to
correctly build an egg and a source distro?
Simplistix - Content Management, Zope & Python Consulting
At 10:59 AM 8/17/2007 +0100, Luis Bruno wrote:
>The really *big* -1 this has is that I'm basically gonna be using
>--single-version-externally-managed eggs (which makes it impossible to
>have multiple "inactive" versions and require() them, if I understood
>Phillip Eby correctly).
You can have inactive versions and require() them, they just have to
be .egg files or directories. You can have a "default" version
that's installed --single-version, e.g. by a system package manager
such as RPM.
>I was thinking "sync local" re-gets the repository's Packages
>master-list. Then you read in the locally installed ones (which is a
>matter of traversing sys.path and looking for the .egg-info files; I
>think those are now (as of 2.5) expected to be there).
Please, please, *please* use the published APIs in pkg_resources for
this. Too many people are writing tools that inspect egg files and
directories directly -- and get it only partly right, making
assumptions about the formats that aren't valid across platforms,
Python versions, etc., etc.
In general, if you are doing absolutely *anything* with on-disk
formats of eggs, and you didn't read enough of the docs to find the
equivalent APIs, it's a near-certainty that you don't understand the
format well enough to write your own versions. Meanwhile,
pkg_resources is proposed for inclusion in the Python 2.6 stdlib, so
it's not like it's going to be hard to get a hold of.
In this particular example, by the way, if you want to find all
locally installed packages, you probably want to be using an
Environment instance, which indexes all installed packages by package
name, and gives you objects you can inspect in a variety of ways,
including using .get_metadata('PKG-INFO') calls to read the .egg-info
files -- or .egg-info/PKG-INFO, or EGG-INFO/PKG-INFO, or whatever
file is actually involved. (This is why you need to use the API --
there are a lot of devils in the details.)
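A minimal sketch of that approach (what it prints is simply whatever happens to be installed):

```python
# Enumerate locally installed distributions via the published
# pkg_resources API rather than poking at .egg-info files directly.
import pkg_resources

env = pkg_resources.Environment()   # scans sys.path by default
for project_name in env:            # yields each known project name
    for dist in env[project_name]:  # distributions, newest version first
        print(dist.project_name, dist.version)
        # The API hides the on-disk layout (.egg-info file vs. directory
        # vs. EGG-INFO/PKG-INFO inside a zipped egg):
        if dist.has_metadata("PKG-INFO"):
            pkg_info = dist.get_metadata("PKG-INFO")
```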
>I think easy_install -f <url> can work against an Apache directory index;
>I thought that was the whole point behind it, really.
One of them, anyway. There are other aspects besides -f that work
for directory indexes, such as PyPI "home page" and "download" URL links.
What happens if a user specifies a URL via a '-f' option on the
easy_install command line but our published eggs specify
dependency_links in the setup.py? Will easy_install search the user's
specified location first, last, or not at all?
(I'm hoping for the first!)
I'm asking because we want to provide binaries (eggs) of our projects
for various platforms (RHEL3, RHEL4, RHEL5, Ubuntu 7.04, etc.) all of
which generate the same egg name, but our projects include C/C++
extensions that require compilation for the specific platform, so we've
structured an egg repository that allows us to separate these
equivalently named eggs by the actual target platform. We also want
users to be able to install from source tar/zip balls for platforms we
haven't built binaries for yet, so our eggs all include dependency_links
to point at our egg repo. Due to issues with build management, we
aren't currently listing the target platform URLs in each egg's
setup.py's dependency_links when it gets built on that target platform.
It just has the source tar/zip ball repository's URL.
But early indications are that when a user installs by doing something like:
easy_install -f <enthought platform specific repo url> ets==2.5b2
then the first egg gets found properly in the platform specific repo,
but all of its dependencies are only being searched for in the source
repo. Is this just a misinterpretation of the output generated by
easy_install?
BTW, it would be nice if there were an option to name your generated eggs
based on a full specification of OS and version, hardware arch, etc.
And of course, the whole setuptools universe of tools would need to know
to search for eggs with that level of accuracy before dropping down to
more generic specs.
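Sketching what such a more specific name component could look like; the naming scheme here is invented, and `platform.freedesktop_os_release` is a modern Python API used only for illustration:

```python
# Build a more detailed platform tag than the generic one, e.g.
# "linux-x86_64-ubuntu-7.04" instead of just "linux-x86_64".
import platform
import sysconfig

def detailed_platform_tag():
    base = sysconfig.get_platform()  # e.g. "linux-x86_64" or "win-amd64"
    if platform.system() == "Linux":
        try:
            info = platform.freedesktop_os_release()  # Python 3.10+
            return "%s-%s-%s" % (
                base, info.get("ID", "unknown"), info.get("VERSION_ID", ""))
        except (OSError, AttributeError):
            pass  # no /etc/os-release, or an older Python: fall back
    return base

print(detailed_platform_tag())
```

As noted above, the harder part isn't generating such a tag but teaching the whole toolchain to prefer the most specific match before falling back to generic ones.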
Another similar use case: Some of our users want to locally cache eggs
for distribution within their corporate environment for various reasons
(eggs tested and approved to work with their corporate desktop standard,
etc.) Those users would also be specifying a '-f' option to
easy_install to point at their corporate cache. However, all of our
eggs will have dependency_links specified. Will these corporate users
be able to install and pull things from their local repository, or will
easy_install always look at our dependency_links' specified locations
as well?
It doesn't seem convenient to force them to have to remember to provide
a '-H' setting, though I guess their IT group could create an alias or
batch script that does this for them. I also guess that their IT group
could generate their own version of easy_install that customizes /
hard-codes the URLs that it would look at, but that seems like kind of a
long way to go to solve this problem.
[ I originally posted to catalog-sig(a)python.org, but Jim Fulton
recommended I cross-post to this list, as many of the ideas also
deal with distutils/setuptools. Please cc replies to
catalog-sig(a)python.org as well. ]
I think there's a lot to gain for Python by improving PyPI, and I'm
willing to help. I did help a bit with PyPI at last year's
EuroPython sprint, and was then made aware of
http://wiki.python.org/moin/CheeseShopDev - are these the most
up-to-date plans for PyPI?
If you're in a hurry and don't want to read everything:
1) I've created a little app to help prototype how we can do better
egg/package management at http://contrib.exoweb.net/trac/browser/egg/
2) I'd like feedback, and pointers to how I can help more.
Basically, the problems I would like to work on solving are:
1) Simplifying/enabling discovery of packages
2) Simplifying/enabling management of packages
3) Improving quality and usefulness of package index
From a usability point of view I'd like to focus on the requirements
for the Python newbie, someone that has just discovered Python, but
is probably used to package management systems from Linux
distributions, FreeBSD, and other dynamic languages like Perl and
Ruby (these are also the systems I have experience with, so I'm
pulling ideas from them).
Ideally everything should be (following Steve Krug's "Don't Make Me
Think" recommendations) self-evident, and if that's not possible, at
least self-explanatory. Someone put in front of a keyboard without
having read any docs should be able to find, install, manage, and
perhaps even create Python packages. Better usability will of course
benefit everyone, not just beginners. I'm frankly amazed at how many
people who have programmed Python for years don't really know or use
PyPI. I'm convinced that making more of the Python package system
discoverable and easily accessible will greatly improve the adoption
of Python, the number of Python packages, and the quality of those
packages.
I think the typical use cases would be (in order of importance, based
on what a typical user would encounter first):
* Find available eggs for a particular topic online
* Get more information about an egg
* Install an egg (and its dependencies)
* See which eggs are installed
* Upgrade some or all outdated eggs
* Remove/uninstall an egg
* Create an egg
* Find eggs that are plugins for some framework online
So, first of all we'll need either one command, or a set of similarly
named commands, to do discovery, installation, and management of
packages, as these are common end-user actions. Creation of packages
is a bit more advanced, and could be in another command. If there's
general agreement that Python eggs are the future way of distributing
packages, why not call the command "egg", similar to the way many
other package managers are named after their packages, e.g., rpm, port,
gem? I'll assume that's the case.
Next, where do you find eggs? This might not be a big issue if the
"egg" command is configured properly by default, but I'd offer my
thoughts. I know the cheeseshop just changed name back to PyPI
again. In my opinion, neither of the names is good, in that they
don't help people remember; any Monty Python connection is lost on
the big masses, and PyPI is hard to spell, not very obvious, and a
confusing clash with the also-prominent PyPy project. Why not call
the place for eggs just eggs? I.e., http://eggs.python.org/
So we'd have the command "egg" for managing eggs that are by default
found at "eggs.python.org". I think it's hard to make Python package
management more obvious than this. The goal is to get someone who
is new to Python to remember how to get and where to find packages,
so obvious is a good thing.
THE COMMAND LINE PACKAGE MANAGEMENT TOOL
The "egg" command should enable you to at least find, show info for,
install, and uninstall packages. I think the most common way to do
command line tools like this is to offer sub-commands, a la bzr,
port, svn, apt-get, gem, so I suggest:
egg - list out a help of commands
egg search - search for eggs (aliases: find/list)
egg info - show info for egg (aliases: show/details)
egg install - install named eggs
egg uninstall - uninstall eggs (aliases: remove/purge/delete)
so you can do:
egg search bittorrent
to find all packages that have anything to do with bittorrent (full-
text search of the package index), and then:
egg install iTorrent
to actually download and install the package.
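In today's terms, that subcommand surface could be sketched with argparse; the command names follow the proposal above, and the handlers are left out:

```python
# Sketch of the proposed "egg" command-line surface.
import argparse

def build_parser():
    parser = argparse.ArgumentParser(
        prog="egg", description="find, inspect, and manage eggs")
    sub = parser.add_subparsers(dest="command")
    for name, help_text in [
        ("search", "search for eggs"),
        ("info", "show info for an egg"),
        ("install", "install named eggs"),
        ("uninstall", "uninstall eggs"),
    ]:
        cmd = sub.add_parser(name, help=help_text)
        cmd.add_argument("terms", nargs="*", help="set names and/or queries")
    return parser

args = build_parser().parse_args(["search", "bittorrent"])
print(args.command, args.terms)  # search ['bittorrent']
```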
I've built a prototype for a command that works this way,
implementing most of the use cases (except the last) at least
partially. You can give it a go as follows:
# install prerequisites on your platform
# e.g., sudo apt-get install python-setuptools sqlite3 libsqlite3-0
svn co http://contrib.exoweb.net/svn/egg/
cd egg  # checkout directory name assumed
sudo python setup.py develop # should install storm for you
gzip -dc pypi.sql.gz | sqlite3 ~/.pythoneggs.db # bootstrap cache
egg sync # update cache
It's still incomplete, lacking tests, might only work on unix-y
systems, and lacks support for lots of features like
activation/deactivation and upgrades, but it works for basic stuff
like finding, installing, and uninstalling packages.
Summary of the design:
* Local and PyPI package information is synchronized into a local
sqlite database for easy access
* Storm is used for ORM (but could easily be changed)
* Installation is handled by passing the "egg install" command off to
easy_install
* I'm using a non-standard command-line parser (but could easily be
changed)
* For interactive use on terminals that support it: colorizes and
adjusts text to fit
While doing the synchronization with PyPI I discovered a couple of
issues, described below, that make the application unfit for common
use yet. (E.g., it has to query PyPI for each of the packages.)
Most subcommands take arguments that can be a free mix of set names
and query strings. I thought this would make for the most forgiving
and user-friendly interface. These are filters; by default all eggs
are included.
SETS: Eggs have a few attributes that can be used to limit to a
subset of all eggs, e.g., whether an egg is installed, active, outdated,
local, or remote. Specifying several of these intersects the
sets, further limiting the number of eggs.
QUERY STRINGS: If none of the set names are matched, the argument is
assumed to be a query string. Many subcommands like "search" do a
full-text search of the package cache database. Others, like "list",
will do a substring match of package names. Others, like "install"
will require you to match the name exactly. You can specify a
specific version by adding a slash, e.g., "name/version".
Here are some example commands:
egg list installed sql - list all installed eggs with sql in their name
egg search installed sql - list all installed eggs mentioning sql
anywhere in the package metadata
egg list outdated installed - list all outdated installed eggs
egg list outdated active - list all outdated and active eggs
egg uninstall outdated - uninstall all outdated eggs
egg info pysqlite - show information about pysqlite
egg info pysqlite/2.0.0 - show information about version 2.0.0 of pysqlite
egg sync local - rescan local packages and update cache db
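The filtering rules above could be sketched roughly like this; the set names are the ones listed earlier, while the tuple representation for queries is just illustrative:

```python
# Classify subcommand arguments into set filters and query strings,
# following the rules described above.
SET_NAMES = {"installed", "active", "outdated", "local", "remote"}

def classify(args):
    sets, queries = [], []
    for arg in args:
        if arg in SET_NAMES:
            sets.append(arg)  # each named set further narrows the result
        else:
            # "name/version" pins a specific version of a package
            name, _, version = arg.partition("/")
            queries.append((name, version or None))
    return sets, queries

print(classify(["outdated", "installed"]))  # (['outdated', 'installed'], [])
print(classify(["pysqlite/2.0.0"]))         # ([], [('pysqlite', '2.0.0')])
```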
PYPI IMPROVEMENT SUGGESTIONS
While doing the application I discovered one important missing
feature: PyPI doesn't offer a way to programmatically bulk-download
information about all eggs, as is customary for many other packaging
systems. This means "egg sync" has to fetch the information
for each package individually. I think it wouldn't be hard to offer
a compressed XML file with all of the package information, suitable
for mirroring and syncing.
A minor nuisance is that there's no way to get only eggs/
distributions; PyPI lists packages, and some packages don't even have
any eggs. The "egg" command will try to download each of these empty
packages at each sync (since it treats empty packages as "packages
for which we haven't downloaded eggs for yet"). It might be better
to list eggs/distributions instead of packages.
There's a lot of opportunity in improving the consistency and
usefulness of package metainformation. Once you have it all synced
to a local SQLite database and start snooping around, it'll be pretty
obvious; very few packages use the dependency fields, etc. (In fact, I
think the dependencies/obsoletes definitions are overengineered; we
could get by with just a simple package >= version number).
Many people use other platform-specific packaging systems to manage
Python packages, probably both because this gives dependencies to
other non-Python packages, but also because PyPI hasn't been very
useful or easy to use. It may even be asked what the role of PyPI is,
since it's never going to replace platform-specific packaging
systems; should it then support them? How? In any case, installing
Python packages from different packaging systems would result in
problems, and currently "egg" can't find Python packages installed
using other systems. ("Yolk" has some support for discovering Python
packages installed using Gentoo.)
Optional: These days XMLRPC (and the WS-Deathstar) seems to be losing
steam to REST, so I think we'd gain a lot of "hackability" by
enabling a REST interface for accessing packages.
Eventually we probably need to enforce package signing.
It'd be good for "egg" to support both system- and user-wide
configurations, and to support downloading from several package
indexes, like apt-get does.
Perhaps "egg" should keep downloaded packages in a cache, like
apt-get and, I believe, buildout do.
Perhaps "egg" should provide a simple web server to allow browsing
(and perhaps installation from) local packages (I believe the Ruby
guys have this). If this web server were discoverable via
Bonjour/Zeroconf, then all that's needed to set up a cache of PyPI is
to run an egg server (that people on the net auto-discover) and
regularly download all packages.
How could "egg" work with "buildout"? Should buildout be used for
project-specific egg installations?
There are a lot of other ideas and thoughts in the TODO.txt file of
the egg command, and I agree it's a good idea to join forces
(distutils, setuptools, yolk, enstaller, PythonEggTools, ...).
(Since my email was a bit long and wide I'm trying to update the
subject when the response is rather focused.)
On Aug 15, 2007, at 06:37, Paul Boddie wrote:
> Bjørn Stabell wrote:
> I've been moderately negative about evolving a parallel
> infrastructure to
> other package and dependency management systems in the past, and
> I'm not
> enthusiastic about things like CPAN or language-specific
> equivalents. The
> first thing most people using a GNU/Linux or *BSD distribution are
> likely to
> wonder is, "Where are the Python packages in my package selector?"
> There are exceptions, of course. Some people may be sufficiently
> versed in the ways of Python, which I doubt is the case for a lot of
> people looking
> for packages. Others may be working in restricted environments
> where system
> package management tools don't really help. And people coming from
> Perl might
> wonder where the CPAN equivalent is, but they should also remember
> what the system provides - they have manpages for Perl, after all.
> I've read through the text that I've mercilessly cut from this
> response, and I
> admire the scope of this effort, but I do wonder whether we
> couldn't make use
> of existing projects (as others have noted), and not only at the
> Python-specific level, especially since the user interface to the
> "egg" tool
> seems to strongly resemble other established tools - as you seem to
> admit in
> this and later messages, Bjørn.
> I was thinking of re-using the Debian indexing strategy. It's very simple,
> perhaps almost quaintly so, but a lot of the problems revealed with
> current strategies around PyPI (not exactly mitigated by bizarre
> constraints) could be solved by adopting existing well-worn practices.
> If I recall correctly, the PEP concerned just "bailed" on the version
> numbering and dependency management issue, despite seeming to be
> inspired by
> Debian or RPM-style syntax.
> As I've said before, it's arguably best to work with whatever is
> there, particularly because of the "interface" issue you mention with
> non-Python packages. I suppose the apparent lack of an open and
> widely available package/dependency management system on Windows
> (and some UNIX flavours) can
> be used as a justification to write something entirely new, but I suspect
> that only very specific tools need writing in order to make existing
> distribution mechanisms work with Windows - there's no need to replace
> existing work from end to end "just because".
> Agreed. And by adopting existing mechanisms, we can hopefully avoid
> having to
> reinvent their feature sets, too.
> P.S. Sorry if this sounds a bit negative, but I've been reading the
> archives of the catalog-sig for a while now, and it's a bit painful reading
> about how
> sensitive various projects are to downtime in PyPI, how various mirrors
> have been devised with accompanying whisper campaigns to tell
> people where
> unofficial mirrors are, all whilst the business of package distribution
> continues uninterrupted in numerous other communities.
> If I had a critical need to get Python packages directly from their
> authors to
> run on a Windows machine, for example, I'd want to know how to do
> so via a
> Debian package channel or something like that. This isn't an original idea -
> I'm sure that Ximian Red Carpet and Red Hat Network address many of these needs.
There seem to be two issues:
1) Should Python have its own package management system (with
dependencies etc) in parallel with what's already on many platforms
(at least Linux and OS X)? Anyone that has worked with two parallel
package management systems knows that dependencies are hellish.
* If you mix and match you often end up with two of everything.
* It'll be incomplete because you can't easily specify
dependencies on non-Python packages.
2) If we agree Python should have a package management system, should
we build or repurpose some other one?
* I think it's a matter of pride and proof of concept to have one
written in Python. That doesn't mean we can't get ideas from others.
* It's also not that hard to do. The prototype I threw together took
one weekend + half a day, and consists of about 500 lines of new
code. It could be refactored and made smaller, but even if a
complete version is ten times that size, it's still not a huge
amount of code.
* With a Python version we could relatively easily innovate beyond
what traditional packaging systems do; ports and apt are pretty much
stagnated. I think RubyGems seems to have some cool features,
features that probably wouldn't have happened if they were using
ports or apt-get (but then they could piggyback on innovations in
those tools, I guess). If it works for them, why shouldn't it work
for us?
* It would have to be as portable as Python is; many packaging
systems are by nature relatively platform-specific.
* If we don't build our own, doesn't that mean we throw out eggs?
* Packaging systems are useful for mega frameworks like Zope,
TurboGears, and Django, and slightly less so for projects you roll on
your own, to manage distribution and installation of plugins and
addons. Relying on platform-specific packaging systems for these may
not work that well. (But I could be wrong about that.)
That said, it might be possible to do some kind of hybrid, for PyPI
to be a "meta package" repository that can easily feed into platform
specific packaging systems. And to perhaps also have a client-side
"meta package manager" that will call upon the platform-specific
package manager to install stuff.
It looks like, for example, ports have targets to build to other
systems, e.g., pkg, mpkg, dmg, rpm, srpm, dpkg. So maintaining
package information in (or compatible with) ports could make it easy
to feed packages into other package systems.
* Benefit: We're working with other package systems, just making
it easier to get Python packages into them.
* Drawback: They may not want to include all packages, at the
speed at which we want, or the way we want to. (I.e., there may
still be packages you'd want that are only available on PyPI.)
* Drawback: Some systems don't have package systems.
Which brings me to: If we're just distributing source files why don't
we use a source control system such as svn, bzr, or hg? The package
developers have trunk, PyPI is a branch, the platform-specific
package maintainers have a branch, and what's installed onto your
system is in the end a branch (serially connected). Some systems,
like Subversion, can also include externals like I did with cliutils
on the egg package. Just a thought.
We've gotten a number of problem reports from people trying to install
our egg releases where there doesn't seem to be enough information being
output to the console to really debug the problem. Here's an
illustrative log taken from a user's report:
C:\Documents and Settings\Gary>easy_install -f
Searching for ets
Best match: ets 2.5b2
Moving ets-2.5b2-py2.5.egg to c:\python25\lib\site-packages
Adding ets 2.5b2 to easy-install.pth file
Processing dependencies for ets
Searching for wx==2.6
Couldn't find index page for 'wx' (maybe misspelled?)
Scanning index of all packages (this may take a while)
No local packages or download links found for wx==2.6
error: Could not find suitable distribution for Requirement.parse('wx==2.6')
I've verified that our 'ets' egg does not even mention 'wx' in its
setup.py so something else on this user's system must be requiring it,
right? But there's no information here about where the requirement came from.
In fact the situation is even worse than that because easy_install
outputs the line about processing dependencies for ets, but the next
step is about some other egg's dependencies and not one of the 20+
dependencies actually listed in the 'ets' egg.
Can easy_install be updated to inform the user of which egg (or the
first one, if there are many) generated the requirement on
each thing it is trying to install? I'm thinking of a change in the
line that looks like:
Searching for wx==2.6
to something like
Searching for wx==2.6 (required by foo.bar-1.0)
It also seems like the error line should indicate which egg generated
the requirement for 'wx==2.6', right?
FYI - The issues discussed here combine problems specific to the Enthought
system with problems using setuptools (which is used, if available, for
parts of the packaging of Kiva).
---------------------------- Original Message ----------------------------
Subject: Re: [Enthought-dev] problem easy_installing kiva: agg.i missing
From: "Stanley A. Klein" <sklein(a)cpcug.org>
Date: Tue, August 14, 2007 11:46 am
Cc: "Dave Peterson" <dpeterson(a)enthought.com>
I need to remove the files from the Kiva rpm for two reasons.
First, they don't belong in python/site-packages on a *nix system if
compliance with the *nix File Hierarchy Standard and/or the Linux
Standard Base (which cites the FHS) is desired. If they are in the rpm
they need to go somewhere else. I can fix that by declaring the docs
files in the setup.cfg file. I tried that fix, although I don't know if I
did it correctly and it didn't resolve both problems.
Second, they are causing trouble in running the bdist_rpm because their
.pyc and .pyo files are producing an "installed but unpackaged files"
error. That is somewhat peculiar to Fedora (and RedHat) because SE-Linux
needs to know what files are in sensitive directories or it may think they
might have been put there by an intruder and thus block access to them.
That problem may be due to the lack of __init__.py files (even blank ones)
in the relevant directories. Setuptools apparently looks for __init__.py
files to guide its processing. I could put in blank __init__.py files and
see what that does.
However, I would rather fix both problems (the packaging error and the
file placement) at the same time. I figured the easiest way to fix the
problem is to get the directories out of Kiva and into somewhere else (a
Kiva docs directory that includes tests, examples, and docs).
I possibly need to go back, put in the blank __init__.py files in a lot of
places (the problem is inconsistent and I can't figure out why that
happens) and declare the docs as going into the proper FHS place.
On Tue, August 14, 2007 8:58 am, Dave Peterson <dpeterson(a)enthought.com> wrote:
> Message: 1
> Date: Mon, 13 Aug 2007 20:23:31 -0500
> From: Dave Peterson <dpeterson(a)enthought.com>
> Subject: Re: [Enthought-dev] problem easy_installing kiva: agg.i
> Hi Stan,
> Unfortunately, I've never tried to do a bdist_rpm so I'm at a loss on
> what might be going on here. But if I had to guess, I'm wondering if
> setuptools uses the .svn dirs for its build process -- these won't be
> updated by a local svn mv that hasn't been committed yet so it wouldn't
> surprise me that even doing an rm -rf wouldn't get rid of the record of
> them. In fact, this seems like it would actively cause the error
> mentioned below -- you've removed the file but setuptools thinks they
> should exist since there is a record in svn.
> Why do they need to be removed from the rpm?
> -- Dave
> Stanley A. Klein wrote:
>> I svn-updated my enthought.branches/enthought.kiva.
>> The build worked.
>> I'm now back to having problems with unpackaged files and am working on
>> that issue, which is becoming very strange as well.
>> I have tried to get all the files not needed for Kiva operation into a
>> separate directory (called kiva_docs) at the enthought.branches level.
>> That includes the directories kiva/tests, kiva/agg/tests,
>> kiva/agg/examples, and kiva/agg/docs, plus some others that I also can't
>> seem to get rid of but don't seem to create the same error condition. I
>> did an "svn mv". I went in and did an "rm -rf" of the directories.
>> However, it seems that the files I thought I got rid of always seem to
>> keep getting found, included as files to be installed, and creating
>> problems with the "installed but unpackaged files" error of rpm.
>> Stan Klein
>> On Mon, August 13, 2007 2:26 pm, Dave Peterson <dpeterson(a)enthought.com> wrote:
>>> Message: 4
>>> Date: Mon, 13 Aug 2007 12:41:24 -0500
>>> From: Dave Peterson <dpeterson(a)enthought.com>
>>> Subject: Re: [Enthought-dev] problem easy_installing kiva: agg.i
>>> I'm curious if you've tried this on the latest unstable kiva? The svn
>>> version is currently in branches and the source tarball is in the
>>> unstable dir. I'm talking about enthought.kiva-2.0.0b2.dev here.
>>> -- Dave
>>> Stanley A. Klein wrote:
>>>> I'm having a very similar problem when I try to do "python setup.py
>>>> bdist_rpm" on kiva. When I run "python setup.py build" it completes
>>>> properly. The bdist_rpm creates an rpm spec file with nearly the same command
>>>> (or by adding an option I can make it the same command). However, when
>>>> the spec file runs, I get the error message about it not being able to
>>>> find agg_wrap.c.
>>>> My traceback (below) points to the error arising in the numpy code.
>>>> I wonder if this has anything to do with setuptools being svn-aware. I
>>>> can imagine a possible cause being some anomaly in the embedded svn
>>>> information for kiva.
>>>> I hope this additional information helps.
>>>> Stan Klein
>> [remainder deleted]
Stanley A. Klein, D.Sc.
Open Secure Energy Control Systems, LLC
8070 Georgia Avenue
Silver Spring, MD 20910
I'm only on day two or three of our great modernization and buildout
experiment. Today I decided to set up my user defaults and try out the
per-user defaults file. The documentation says:
If the file $HOME/.buildout/defaults.cfg, exists, it is read before
reading the configuration file
So I went ahead and made a "defaults.cfg" in my $HOME/.buildout
directory. But it didn't seem to do anything.
The problem is that this first mention of the user-defaults config
file uses the plural "defaults.cfg". The system itself expects the
singular "default.cfg", and the test/example code immediately
following the above line uses the singular 'default.cfg'.
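So, with the singular name, a per-user defaults file looks something like this (the option shown is just a common example, not from the original post):

```ini
# $HOME/.buildout/default.cfg -- note the singular "default"; read
# before the buildout.cfg being run, so it's a good place for settings
# shared across all your buildouts.
[buildout]
eggs-directory = /home/me/.buildout/eggs
```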