Just thought I'd pass this along, since it took me a little while to
figure it out. Maybe this can go in docs somewhere, but I'm not sure where.
So, I'm writing some code that uses egg plugins, and thus I need testing
eggs. These need to be path-independent (since the checkouts might live
anywhere), with no setup commands (you shouldn't have to install the
testing version of the eggs to run the tests), and the eggs should be
available only when you are running the tests (no global installation).
I'm using py.test, so I add a conftest.py file which is loaded before
any tests are imported. It's important all this is done before
pkg_resources is imported (maybe there are methods in pkg_resources that
can fix things after it was imported, but pkg_resources uses sys.path
when it is imported, so if you adjust the path later then pkg_resources
won't notice it).
Anyway, here's the code I use:
import glob, os, sys

here = os.path.dirname(__file__)
base = os.path.dirname(here)
fake_packages = os.path.join(here, 'fake_packages')
# Each testing egg lives in its own subdirectory of fake_packages;
# putting that subdirectory on sys.path makes the egg importable.
for egg_info_dir in glob.glob('%s/*/*.egg-info' % fake_packages):
    sys.path.insert(0, os.path.dirname(egg_info_dir))
At first I tried adding fake_packages to sys.path; didn't work at all.
But if I do site.addsitedir(fake_packages) then it would work. But this
requires an .egg-link file in fake_packages, and that file has to have
an absolute path (it can't be relative), but fake_packages could be
anywhere. So in the end, I just need to add all the necessary paths;
this means I can't test the case when --multi-version is used to install
an egg, but I guess I won't worry about that.
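As a self-contained illustration of that path-adding approach, here's a sketch (the fake_packages layout and the 'fakepkg' name are invented for the example) that builds a throwaway directory mimicking the checkout and confirms the egg's package becomes importable:

```python
import glob
import os
import sys
import tempfile

# Fabricate a layout like the one described above:
# fake_packages/FakePkg/fakepkg/__init__.py plus an .egg-info marker dir.
fake_packages = os.path.join(tempfile.mkdtemp(), 'fake_packages')
pkg_root = os.path.join(fake_packages, 'FakePkg')
os.makedirs(os.path.join(pkg_root, 'fakepkg'))
os.makedirs(os.path.join(pkg_root, 'FakePkg.egg-info'))
with open(os.path.join(pkg_root, 'fakepkg', '__init__.py'), 'w') as f:
    f.write('MARKER = "hello"\n')

# The conftest.py loop: every directory containing an .egg-info
# goes onto sys.path, making its packages importable for the tests.
for egg_info_dir in glob.glob('%s/*/*.egg-info' % fake_packages):
    sys.path.insert(0, os.path.dirname(egg_info_dir))

import fakepkg
print(fakepkg.MARKER)  # hello
```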
If you are curious about the base setup, I've checked in a minimal file
layout of the whole thing into
Ian Bicking / ianb(a)colorstudy.com / http://blog.ianbicking.org
I know this topic has briefly been covered before
but I'm having some trouble with it.
I have an in-house app that uses RuleDispatch (and of course
PyProtocols). I used to just build them from cvs myself, but now I've
switched to using the eggs. Since I'm doing all the development on my
machine, and no one else will see my scripts, I don't worry about
pkg_resources.require(); I just import 'dispatch' and use it directly.
py2exe doesn't like this at all though. Whatever mechanism it uses for
pulling in dependencies doesn't work with eggs.
If there is a solution, I'd prefer *not* to have to do anything
specific in my setup.py, for each egg. As time goes on, I'd like to
move more and more of my site-packages to eggs, and I don't want to
have to modify my setup.py every time I switch another library to an egg.
Any solutions to this? I don't mind a solution that requires me to add
something to setup.py that can automatically find any of the eggs I
use, but I'd really like to avoid adding something to setup.py for
*each* egg I use.
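For what it's worth, one possible direction (a sketch only, not a tested py2exe fix) is to discover the eggs on sys.path generically, so setup.py never has to name any egg explicitly; the helper below just collects egg entries, which could then be fed to whatever py2exe needs:

```python
import sys

def egg_entries(path=None):
    """Return the sys.path entries that are eggs (directories or zip
    files whose names end in '.egg'), without naming any egg explicitly."""
    entries = sys.path if path is None else path
    return [p for p in entries if p.endswith('.egg')]

# e.g. egg_entries(['/libs/RuleDispatch-0.5a0-py2.4.egg', '/usr/lib/python'])
# -> ['/libs/RuleDispatch-0.5a0-py2.4.egg']
```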
Thanks in advance,
When developing a namespace distribution, where should I put it? Should
each such distribution be a separate project in the repository (e.g.,
code goes in Paste/exceptions/trunk/paste/exceptions/)? Can (or should)
I put them all together and let setuptools break them up when I build
the distributions? And will I have to move modules out of the namespace
package and into their own projects?
Ian Bicking / ianb(a)colorstudy.com / http://blog.ianbicking.org
Seems I was late to try out setuptools. Fantastic package, is my first
impression. But I still have a lot to read and try out.
But here is a first question: The 'test' command runs 'build_ext -i' and
then the tests. Usually, I test my packages with several Python versions.
Now, the problem is that the extension module is built in the source
directory, and is only compatible with exactly one python version.
This is on Windows, but I assume the problem would happen on other
platforms as well - the SF compilefarm uses the same home directory for
all the different platforms. Binaries compiled for OSX Power PC are not
compatible with Linux AMD64, for example - but they all would have the
same filename.
This problem is not new, of course.
The workaround I have used for the ctypes project so far is a __path__
hack: the main module importing this extension tries to execfile() a
special file that is in CVS, but never distributed. This file uses
distutils to determine the name of the build directory, and inserts this
directory into the package's __path__.
Is there anything in setuptools to work around that problem, or are
there any plans or ideas how this could work?
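For comparison, the name of the version-specific directory that 'build_ext' (without -i) would use can be computed from the standard library; this sketch uses sysconfig, the modern stand-in for the distutils helper the __path__ hack relied on (the exact directory naming is an assumption about the classic scheme, not a setuptools API):

```python
import os
import sys
import sysconfig

def version_specific_build_dir(base='build'):
    """Return the per-interpreter build directory name in the classic
    distutils style, e.g. build/lib.linux-x86_64-3.12, so a package's
    __path__ could point at a build tree that is safe to share across
    Python versions and platforms."""
    return os.path.join(base, 'lib.%s-%d.%d' % (
        sysconfig.get_platform(), sys.version_info[0], sys.version_info[1]))
```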
I just tried to run the build of my project on a box that has
localized SVN messages (German). As the tag_svn_revision option is
enabled in the setup config, setuptools calls "svn info" and tries to
parse the output, looking for the "Last Changed Rev: (\d+)" string. In
the german output this label is obviously different, so the parsing
will fail and the build will abort.
I can temporarily switch subversion to use English messages for the build:
$ export LC_MESSAGES=en_US
$ ./setup.py test
But still, this should probably be fixed in setuptools.
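One locale-proof shape for the fix (a sketch, not setuptools' actual code: the function names are mine) is to force the C locale when invoking svn, and keep the parsing separate so it can be tested on its own:

```python
import os
import re
import subprocess

_REV_RE = re.compile(r'Last Changed Rev: (\d+)')

def parse_last_changed_rev(svn_info_output):
    """Extract the revision from English 'svn info' output; None if the
    expected label is absent (which is exactly the localized failure)."""
    m = _REV_RE.search(svn_info_output)
    return int(m.group(1)) if m else None

def last_changed_rev(path='.'):
    """Run 'svn info' with the C locale so the label is always English,
    regardless of the user's LC_MESSAGES setting."""
    env = dict(os.environ, LC_ALL='C', LANG='C')
    out = subprocess.run(['svn', 'info', path], env=env,
                         capture_output=True, text=True).stdout
    return parse_last_changed_rev(out)
```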
cmlenz at gmx.de
I was trying to use the distutils sdist target on an AFS mounted
filesystem on Linux. It fails because it can't make hard links between
directories on AFS (you can inside a directory). I have not been able
to find any options to make it copy the files instead.
1) Is there a way for me to force it to copy the files into the
temporary directory instead of using hard links?
2) Given that it is very hard to predict whether a particular use of
hard links is possible on any given OS/filesystem/mount combination,
it seems to me that the best strategy would be to try to make the
hard links and, if this fails, silently fall back to making a copy.
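The fallback in (2) is easy to sketch; this is what such a helper might look like (an illustration, not what distutils actually does):

```python
import os
import shutil

def link_or_copy(src, dst):
    """Try a hard link first (cheap, which is why sdist prefers it);
    if the filesystem refuses -- as AFS does across directories --
    silently fall back to a plain copy that preserves metadata."""
    try:
        os.link(src, dst)
    except OSError:
        shutil.copy2(src, dst)
```

In the meantime, since the distutils only uses hard links when os.link exists, deleting the attribute (del os.link) at the top of setup.py is a known way to force the copying code path.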
I have another question: how do I find what python version a module on pypi requires?
If I try to install module graph in Python 2.3, it just says that
@property is invalid syntax.
How can I detect this?
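One defence (a sketch; the helper name is invented) is a guard at the top of setup.py that checks sys.version_info before any 2.4-only syntax is imported, so the user gets a clear message instead of a confusing SyntaxError (decorator syntax like @property requires Python 2.4):

```python
import sys

def require_python(minimum, actual=None):
    """Abort with a readable message when the interpreter is older
    than the (major, minor) tuple given as 'minimum'."""
    actual = sys.version_info if actual is None else actual
    if tuple(actual[:2]) < minimum:
        raise SystemExit('This package requires Python %d.%d or later.'
                         % minimum)

require_python((2, 4))  # current interpreters sail through
```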
easy_install doesn't work automatically with elementtree, because its
download link leads to an interstitial HTML page. So I thought I'd add
the proper link to an index I'm keeping:
But easy_install ignores it. It looks fine to me?
Also, if I want to require a package that isn't locatable through PyPI,
how should I deal with that? I can add a find_links value to setup.cfg,
then put the link there. It'd be easier if I could put a URL to the
package somewhere. Or I guess if I could have a sort of local index.
But if I put "./docs/packages.html" as a find_links value, easy_install
can't find that. So, what's the best way to deal with that? I'm okay
with the index page myself, but I think other people may not want to
maintain such a thing. (I'm thinking of writing a little app to create
an index for broken PyPI entries, but that's another topic.)
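For reference, the setup.cfg form I mean looks like this (the URL is a placeholder, not a real index):

```
[easy_install]
find_links = http://example.com/package-index.html
```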
And lastly on this topic, what should I do when it's likely a package
will be installed without setuptools/easy_install? For something like
ElementTree there are lots of packages (RPM, deb, etc) that install the
package; I can't really ask people to uninstall those and install the
egg instead.
Ian Bicking / ianb(a)colorstudy.com / http://blog.ianbicking.org
After thinking over the last week's distutils-sig discussion about
security, signatures, etc., I think I have a plan for handling basic file
integrity checking and (non-cryptographic) trust management for
EasyInstall. It is not a high-security end-to-end solution, but I think it
will allow security-conscious persons to take a more "locked down" approach
if they want to, while providing everyone else with some baseline
protection against corrupted files.
The first part of the plan is to add md5 digest checking to
EasyInstall. Because one of EasyInstall's design goals is to make it easy
for anybody to publish links to packages, we need to be able to include the
md5 signature in a package's URL. I'm thinking we could achieve this via
an '#md5=...' fragment identifier. For example, a setuptools source
archive URL might be:
The advantage of this approach is that it allows anyone to assert what the
md5 of the targeted file is, and it can be asserted in any web page, just
by pointing an HREF at the file. EasyInstall could detect the '#md5='
marker, and then use this to verify the file during download.
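A sketch of the check (the function names are mine, and I'm using the modern hashlib/urllib spellings rather than whatever EasyInstall would actually use):

```python
import hashlib
from urllib.parse import urldefrag

def md5_from_url(url):
    """Split off a '#md5=...' fragment asserted by whoever published
    the link; returns (plain_url, hex_digest_or_None)."""
    base, fragment = urldefrag(url)
    if fragment.startswith('md5='):
        return base, fragment[4:]
    return base, None

def matches_md5(data, digest):
    """Verify downloaded bytes against the asserted hex digest."""
    return hashlib.md5(data).hexdigest() == digest
```

So any web page can vouch for a file just by putting the digest in the HREF, and the downloader checks it after fetching.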
The disadvantage, of course, is that PyPI doesn't currently support this;
it creates a separate link to a page that displays the md5, and that URL
doesn't contain anything that connects it back to the distribution file it
refers to. I could probably create some kind of parsing hack to fix that
for PyPI, but it seems it might be worth adding the #md5 trick to PyPI
itself.
EasyInstall would also need to grow a --require-md5 option, which would
refuse to install anything from a Subversion checkout or a distribution
without a known md5 signature.
In addition to md5 support in EasyInstall, I propose to also add it to
ez_setup; there, however, the md5 values for various distributions will be
hardcoded into ez_setup.py itself. (I'll make my "release" script append
the md5 signatures for new distributions to the end of ez_setup.py.) In
this way, the bootstrap installation of setuptools can also be reasonably
secured, as long as you trust a particular version of ez_setup.py.
The next part of the plan would be to add an --allow-hosts option to
EasyInstall. This would be a list of host wildcards that EasyInstall would
be allowed to contact. For example, --allow-hosts=*.python.org would let
EasyInstall download or scan pages from PyPI or www.python.org, but not
anywhere else. The default, if not specified, would be '*', meaning that
any host may be accessed. If EasyInstall finds itself about to download a
page or distribution from a host that isn't allowed, it will abort with a
message explaining the problem.
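The matching itself could be as simple as fnmatch-style wildcards (a sketch of the idea, not the real implementation; note one caveat of this rule: '*.python.org' would not match bare 'python.org'):

```python
from fnmatch import fnmatch

def host_allowed(host, allow_hosts=('*',)):
    """True if the host matches any of the --allow-hosts patterns;
    the default '*' permits everything, matching the proposal."""
    return any(fnmatch(host, pattern) for pattern in allow_hosts)
```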
This would allow folks like Paul Moore to configure a default --allow-hosts
list in their pydistutils.cfg, to prevent EasyInstall from downloading
things from just any old place on the Internet. Once he's verified that he
trusts a particular site, he can edit pydistutils.cfg and add it, or else
manually download the blocked URL, publish it on a trusted intranet host, etc.
So, this is not a complete security solution, as it doesn't deal with
end-to-end file integrity, and could easily be subverted by taking over a
site somewhere in the middle (e.g. python.org). But until we have more of
the cryptographic infrastructure in place, I think this plan could provide
us with a good starting point. Comments, anyone?
I've just checked in experimental support for lazy, non-empty namespace
packages. And what is that, you might well ask?
Well, it seems that livinglogic.de distributes certain Python packages
using a distutils kludge that allows a kind of crude namespace package to
exist, without using pkg_resources. Specifically, their 'll-core' project
distributes the 'll' package, and various other projects such as 'll-color'
distribute modules for the 'll' package, but *without including an
__init__.py*. This allows the distutils to install the modules without
overwriting or duplicating the single __init__.py file.
This is an interesting approach to addressing namespace packages in a
pre-setuptools world, but until now it hasn't been really usable with
setuptools, for several reasons:
* pkg_resources only loaded one __init__ file for a given package
* pkg_resources couldn't import modules from a zipfile or directory with
no __init__.py
* pkg_resources automatically imports registered namespace packages as
soon as they're discovered, which is not a big deal for empty namespace
packages, but could easily become quite problematic for ones like 'll' that
contain actual code.
So, here's the solution:
* pkg_resources now loads all __init__.py's from all the distributions
that provide contents for the package.
* the bdist_egg command automatically generates a dummy __init__.py for
packages that don't have them. The __init__.py file contains the single line:
__import__('pkg_resources').declare_namespace(__name__)
This ensures that the other package contents are importable, and that
as soon as an import of the package occurs, it gets registered as a
namespace package.
* projects that use this approach to namespace packaging must *NOT*
pass a 'namespace_packages' argument to setup(), because that would cause
the package to be imported even when it isn't necessary. They must also
put a 'declare_namespace()' call in all of their __init__.py's, to ensure
that the package will become a namespace no matter what the order of
distributions on sys.path is.
Because of the complexity of this approach, I do not recommend it for new
projects. Namespace packages should not have code in any of their
__init__.py files, as it eliminates all of these headaches. So, this is an
experimental feature intended to facilitate backward compatibility
only. Don't expect it to show up in the documentation any time soon!
Anyway, I was able to get this to work for ll-core and ll-color, as long as
I added a 'declare_namespace()' call like the one shown above to ll-core's
__init__.py, and made them both use setuptools. I would appreciate any
feedback that folks have on this.
By the way, because SourceForge CVS updating is slow, it may be a few hours
before you can check out the version of setuptools that supports
this. Make sure you have pkg_resources.py revision 1.69 or higher, and
setuptools/command/bdist_egg.py revision 1.28 or higher, if you want to
try it out.