Hi all,
I've been aware that the distutils sig has been simmering away, but
until recently it has not been directly relevant to what I do.
I like the look of the proposed API, but have one question. Will this
support an installed system that has multiple versions of the same
package installed simultaneously? If not, then this would seem to be a
significant limitation, especially when dependencies between packages
are considered.
Assuming it does, then how will this be achieved? I am presently
managing this with a messy arrangement of symlinks. A package is
installed with its version number in its name, and a separate
directory is created for an application, with links from the
unversioned package name to the versioned one. Then I just set
PYTHONPATH to this directory.
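To make that concrete, the per-application link directory ends up being
populated by something roughly like this (the package names are only
examples, and the real script is messier):

import os

lib = "lib/python"
app_dir = os.path.join(lib, "1.1")        # one link directory per application
packages = {                              # unversioned name -> versioned install
    "chart": "chart_1_1",
    "Fnorb": "Fnorb_0_8",
    "PIL": "PIL_0_3",
}

if not os.path.isdir(app_dir):
    os.makedirs(app_dir)
for name, versioned in packages.items():
    link = os.path.join(app_dir, name)
    if not os.path.exists(link):
        os.symlink(os.path.join("..", versioned), link)

# The application is then run with PYTHONPATH=lib/python/1.1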
A sample of what the directory looks like is shown below.
I'm sure there is a better solution than this, and I'm not sure that
this would work under Windows anyway (does Windows have symlinks?).
So, has this SIG considered such versioning issues yet?
Cheers,
Tim
--------------------------------------------------------------
Tim Docker timd(a)macquarie.com.au
Quantitative Applications Division
Macquarie Bank
--------------------------------------------------------------
qad16:qad $ ls -l lib/python/
total 110
drwxr-xr-x 2 mts mts 512 Nov 11 11:23 1.1
-r--r----- 1 root mts 45172 Sep 1 1998 cdrmodule_0_7_1.so
drwxr-xr-x 2 mts mts 512 Sep 1 1998 chart_1_1
drwxr-xr-x 3 mts mts 512 Sep 1 1998 Fnorb_0_7_1
dr-xr-x--- 3 mts mts 512 Nov 11 11:21 Fnorb_0_8
drwxr-xr-x 3 mts mts 1536 Mar 3 12:45 mts_1_1
dr-xr-x--- 7 mts mts 512 Nov 11 11:22 OpenGL_1_5_1
dr-xr-x--- 2 mts mts 1024 Nov 11 11:23 PIL_0_3
drwxr-xr-x 3 mts mts 512 Sep 1 1998 Pmw_0_7
dr-xr-x--- 2 mts mts 512 Nov 11 11:21 v3d_1_1
qad16:qad $ ls -l lib/python/1.1
total 30
lrwxrwxrwx 1 root other 29 Apr 10 10:43 _glumodule.so -> ../OpenGL_1_5_1/_glumodule.so
lrwxrwxrwx 1 root other 30 Apr 10 10:43 _glutmodule.so -> ../OpenGL_1_5_1/_glutmodule.so
lrwxrwxrwx 1 root other 22 Apr 10 10:43 _imaging.so -> ../PIL_0_3/_imaging.so
lrwxrwxrwx 1 root other 36 Apr 10 10:43 _opengl_nummodule.so -> ../OpenGL_1_5_1/_opengl_nummodule.so
lrwxrwxrwx 1 root other 27 Apr 10 10:43 _tkinter.so -> ../OpenGL_1_5_1/_tkinter.so
lrwxrwxrwx 1 mts mts 21 Apr 10 10:43 cdrmodule.so -> ../cdrmodule_0_7_1.so
lrwxrwxrwx 1 mts mts 12 Apr 10 10:43 chart -> ../chart_1_1
lrwxrwxrwx 1 root other 12 Apr 10 10:43 Fnorb -> ../Fnorb_0_8
lrwxrwxrwx 1 mts mts 12 Apr 10 10:43 mts -> ../mts_1_1
lrwxrwxrwx 1 root other 15 Apr 10 10:43 OpenGL -> ../OpenGL_1_5_1
lrwxrwxrwx 1 root other 33 Apr 10 10:43 opengltrmodule.so -> ../OpenGL_1_5_1/opengltrmodule.so
lrwxrwxrwx 1 root other 33 Apr 10 10:43 openglutil_num.so -> ../OpenGL_1_5_1/openglutil_num.so
lrwxrwxrwx 1 root other 10 Apr 10 10:43 PIL -> ../PIL_0_3
lrwxrwxrwx 1 mts mts 10 Apr 10 10:43 Pmw -> ../Pmw_0_7
lrwxrwxrwx 1 root other 10 Apr 10 10:43 v3d -> ../v3d_1_1
Hi there,
In the lxml project (http://codespeak.net/lxml), we've just noticed the
following problem with lxml eggs: you can easy_install an egg that won't
work for your Python.
This is because Python can be compiled with either a 2-byte or a 4-byte
Unicode internal representation. Any egg that contains compiled C code
that uses Unicode, such as lxml, will run into trouble: if it's compiled
against a 4-byte Unicode Python, it won't work on a 2-byte Unicode
Python, and vice versa.
This problem is fairly common on Linux. Many distributions, such as
Ubuntu and Fedora, compile their Python with the 4-byte Unicode internal
representation. If you compile a Python interpreter by hand, however, it
defaults to 2-byte Unicode. Hand-building a Python interpreter is
something Linux sysadmins do fairly commonly, for various reasons.
It would therefore be very nice if it became possible to make eggs for
the different Unicode compilation options of Python. This configuration
dimension is a real-world issue for any binary Python module that does
anything with Unicode text.
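For what it's worth, the variant is easy to detect at runtime, so in
principle the egg platform tag could grow an extra component for it
(just a sketch of the idea, not a concrete proposal for setuptools):

import sys

# A narrow (2-byte, UCS2) build reports sys.maxunicode == 0xFFFF;
# a wide (4-byte, UCS4) build reports 0x10FFFF.
if sys.maxunicode > 0xFFFF:
    unicode_variant = "ucs4"
else:
    unicode_variant = "ucs2"

# e.g. an egg built on a wide-build Linux could then be tagged something
# like "linux-i686-ucs4" instead of plain "linux-i686", so easy_install
# could refuse incompatible eggs.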
In an earlier mail to this list:
http://mail.python.org/pipermail/distutils-sig/2005-October/005222.html
M.-A. Lemburg and Phillip Eby had the following discussion:
[MAL]
>>Please make sure that your eggs catch all possible
>>Python binary build dimensions:
>>
>>* Python version
>>* Python Unicode variant (UCS2, UCS4)
>>* OS name
>>* OS version
>>* Platform architecture (e.g. 32-bit vs. 64-bit)
[PJE]
>As far as I know, all of this except the Unicode variant is captured in
>distutils' get_platform(). And if it's not, it should be, since it
>affects any other kind of bdist mechanism.
I'm not sure whether this means the issue needs to be escalated from
setuptools to the Python interpreter level itself. With this mail, though,
I've done my part in escalating this lxml problem to what appears to be
the right place. :)
Thanks,
Martijn
Hello,
I have created a setup.py file for distribution and I bumped into
a small bug when I tried to set my name in the contact field (Tarek Ziadé).
Using a byte string (UTF-8 encoded file):
setup(
maintainer="Tarek Ziadé"
)
leads to:
File
"/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/distutils/command/register.py",
line 162, in send_metadata
auth)
File
"/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/distutils/command/register.py",
line 257, in post_to_server
value = unicode(value).encode("utf-8")
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 10:
ordinal not in range(128)
Using unicode:
setup(
maintainer=u"Tarek Ziadé"
)
leads to:
File
"/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/distutils/dist.py",
line 1094, in write_pkg_file
file.write('Author: %s\n' % self.get_contact() )
UnicodeEncodeError: 'ascii' codec can't encode character u'\xe9' in position
18: ordinal not in range(128)
I would propose a patch for this problem but I don't know what the best
input would be (I guess unicode for names).
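For what it's worth, the kind of normalization I have in mind is roughly
this (only a sketch, assuming byte strings are UTF-8 encoded):

def metadata_to_utf8(value):
    # Accept either a unicode object or a UTF-8 byte string and always
    # return UTF-8 bytes, so register/write_pkg_file never mix the two.
    if isinstance(value, unicode):
        return value.encode("utf-8")
    # assume the byte string is UTF-8; round-trip to validate it
    return unicode(value, "utf-8").encode("utf-8")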
Regards
Tarek
--
Tarek Ziadé | Association AfPy | www.afpy.org
Blog FR | http://programmation-python.org
Blog EN | http://tarekziade.wordpress.com/
Hi all,
I hope the cross-post is appropriate.
I've started playing with getting the pywin32 extensions building under
the AMD64 architecture. I started building with Visual Studio 8 (it was
what I had handy) and I struck a few issues relating to the compiler version
that I thought worth sharing.
* In trying to build x64 from a 32-bit VS7 (i.e., cross-compiling via the
PCBuild directory), the python.exe project fails with:
pythoncore fatal error LNK1112: module machine type 'X86' conflicts with
target machine type 'AMD64'
Is this a known issue, or am I doing something wrong?
* The PCBuild8 project files appear to work without modification (I only
tried native compilation here though, not a cross-compile) - however, unlike
the PCBuild directory, they place all binaries in a 'PCBuild8/x64'
directory. While this means that it's possible to build for multiple
architectures from the same source tree, it makes life harder for tools like
distutils - e.g., distutils already knows about the 'PCBuild' directory, but
it knows nothing about either PCBuild8 or PCBuild8/x64.
A number of other build processes also know to look inside a PCBuild
directory (e.g., Mozilla), so instead of formalizing PCBuild8, I think we
should merge PCBuild8 into PCBuild. This could mean PCBuild/vs7 and
PCBuild/vs8 directories with the "project" files, but binaries would still
be generated in the 'PCBuild' (or PCBuild/x64) directory. This would mean
the same tree isn't capable of hosting two builds from different VS
compilers, but I think that is reasonable (if it's a problem, just have
multiple source directories). I understand that PCBuild8 is not "official",
but on the assumption that future versions of Python will use a compiler
later than VS7, it makes sense to me to clean this up now - what are others'
opinions on this?
* Re the x64 directory used by the PCBuild8 process. IMO, it makes sense to
generate x64 binaries to their own directory - my expectation is that
cross-compiling between platforms is a reasonable use-case, and we should
support multiple architectures for the same compiler version. This would
mean formalizing the x64 directory in both 'PCBuild' and distutils, and
leaving other external build processes to update as they support x64 builds.
Does this make sense? Would this fatally break other scripts used for
packaging (e.g., the MSI framework)?
* Wide characters in VS8: PC/pyconfig.h defines PY_UNICODE_TYPE as 'unsigned
short', which corresponds with both 'WCHAR' and 'wchar' in previous compiler
versions. VS8 defines this as wchar_t, which I'm struggling to find a
formal definition for beyond being 2 bytes. My problem is that code which
assumes a 'Py_UNICODE *' could be used in place of a 'WCHAR *' now fails. I
believe the intent on Windows has always been "Py_UNICODE == 'native
unicode'" - should PC/pyconfig.h reflect this (i.e., should pyconfig.h grow a
version-specific definition of Py_UNICODE as wchar_t)?
* Finally, as something from left-field which may well take 12 months or
more to pull off - but would there be any interest in moving the Windows
build process to a Cygwin environment based on the existing autoconf
scripts? I know a couple of projects are doing this successfully, including
Mozilla, so it has precedent. It does impose a greater burden on people
trying to build on Windows, but I'd suggest that in recent times, many
people who are likely to want to build Python on Windows are already likely
to have a Cygwin environment. Simpler MinGW builds and nuking MSVC-specific
build stuff are among the advantages this would bring. It is not worth
adding this as "yet another Windows build option" - so IMO it is only worth
progressing with if it became the "blessed" build process for Windows - if
there is support for this, I'll work on it as the opportunity presents
itself...
I'm (obviously) only suggesting we do this on the trunk and am happy to make
all agreed changes - but I welcome all suggestions or criticisms of this
endeavour...
Cheers,
Mark
The pywin32 extensions require (well, prefer) administrative access during
installation - certain files are copied to the System32 directory and the
registry at HKEY_LOCAL_MACHINE is written to. Also, if I understand
correctly, if Python happened to be installed into "\Program Files", admin
access would be required to create any files in that directory tree - I'm
not sure what permissions the \PythonXX directory is created with, but it's
not unreasonable to assume that some shops might choose to secure that
directory similarly to "\Program Files".
The simplest way to achieve this for bdist_wininst installations is to
include some magic in a "manifest". I've confirmed that once this magic is
added, programs created by bdist_wininst get the little "shield" icon
overlay and prompt for elevation before starting the executable. A problem
here is that not all installations will require admin access - e.g., a user
who installed Python just for themselves will not need elevation to install
an extension. A solution here would be for the installer to *not* be marked
as requiring elevation, and instead sniff the registry to make an educated
guess (e.g., the existence of HKLM\Software\Python\PythonCore\2.5 could
indicate that admin access is required). If it finds elevation is required,
it would spawn another copy of itself with elevation requested, and
terminate. This has the side effect that the installer never gets the
"shield" overlay, so the user will not be expecting to be prompted - but that is
something we can live with.
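To make the sniffing idea concrete, the check the installer stub would do
amounts to something like this (sketched in Python for clarity - the stub
itself is C, and the exact key to probe is open to debate):

import _winreg

def admin_probably_required(py_version="2.5"):
    # Windows-only sketch: does this Python appear to be registered
    # per-machine rather than per-user?
    key = r"Software\Python\PythonCore\%s" % py_version
    try:
        _winreg.CloseKey(_winreg.OpenKey(_winreg.HKEY_LOCAL_MACHINE, key))
        return True    # registered per-machine; elevation likely needed
    except WindowsError:
        return False   # assume a per-user install; no elevation needed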
However, there is a complication here. Any "pure python" packages are not
tied to a particular Python version, so the user can choose what installed
Python version to use. Hence, in the general case, we can only determine if
admin is required after the UI has opened and the user has selected the
Python version. Arranging for the new "elevated" child process at this
point would be (a) ugly, as the UI will vanish before the child process
displays its GUI, and (b) would require additional command-line processing
logic - i.e., passing the target directory to the child process. If we could
make the determination *before* the GUI opens, it would appear seamless and
would not require special command-line handling (the child process just does
everything).
So I see a few alternatives, but none are very desirable:
* Only make this "admin required" check if a specific version of Python is
necessary (i.e., the package contains extension modules). This would leave
pure-Python packages out in the cold.
* Live with the ugly UI that would result from performing that check after the
Python version has been selected, and add the command-line processing
necessary to make this work.
* Ignore the issue and try to educate people that they must explicitly use
"Run as Administrator" for such packages on Vista.
I'm wondering if anyone has any opinions or thoughts on how we should handle
this?
Cheers,
Mark
Joshua Boverhof previously reported this in
<http://mail.python.org/pipermail/distutils-sig/2007-March/007436.html>.
If you specify a package in setup_requires, it will be built in the
current directory. But even if it is also listed in install_requires, it
won't be installed, because the requirement is already satisfied at setup
time by the package in the build directory.
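For concreteness, the situation is a setup.py along these lines (the
dependency name is made up):

from setuptools import setup

setup(
    name="example",
    version="0.1",
    setup_requires=["foo>=1.0"],    # fetched and built into ./ at setup time
    install_requires=["foo>=1.0"],  # already satisfied by that local build,
                                    # so it never gets properly installed
)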
Is there a workaround for this?
It looks like this fix to easy_install:
'''
0.6c5
* Fixed .dll files on Cygwin not having executable permissions
when an egg is installed unzipped.
'''
introduced a minor bug. From 0.6c5 on, installing an egg unzipped makes
all the files executable.
Attaching a patch that only makes .dll files executable.
-Toshio
Hi
I've been trying to install an sdist tarball of a Django SVN snapshot
using easy_install, but some of the package's data files (the admin
contrib app templates) aren't being installed. I've looked at the
easy_install web page and can't see anything that might help me install
these data files.
I'm doing this as part of a deployment script that uses easy_install to
install a web app's dependencies into a particular directory.
Is this a misuse of easy_install? Django's setup.py doesn't use
setuptools, and so easy_install seems to be building an egg from the
sdist on the fly. I'm working around this by just scripting the
untarring, "setup.py install", etc. directly, which is OK, but it would
be simpler if I could just call easy_install for all my app's
dependencies.
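For reference, a setuptools-based setup.py would normally declare that sort
of data with something like the following (illustrative only - this isn't
Django's actual setup.py):

from setuptools import setup, find_packages

setup(
    name="exampleapp",
    version="0.1",
    packages=find_packages(),
    include_package_data=True,          # pick up data listed in MANIFEST.in
    package_data={
        "exampleapp.admin": ["templates/admin/*.html"],
    },
)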
thanks,
Graham
I'm using zc.buildout to manage the development of a very simple web
application. I'd like to have a simple test runner that runs my
doctests, but have run into a slight problem. I'm using
wsgi_intercept to aid in testing, but unfortunately wsgi_intercept
isn't a proper Python package ("yet", according to Titus). So I have
a package-i-fied version in my svn repository and in my buildout.cfg I
specify:
develop = . ./wsgi_intercept_
I initially was using the "interpreter" option to get a Python
interpreter with my eggs on the path, and running "./bin/python
setup.py test". That, however, required me to list wsgi_intercept as
a package requirement, when I really want to list it as a testing
requirement. Moving it to a testing requirement caused its develop
egg not to be on the PYTHONPATH, so setuptools tried to find it
elsewhere - which it couldn't.
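To be clear about the distinction I mean, the difference in setup.py is
roughly this (names are illustrative):

from setuptools import setup, find_packages

setup(
    name="mywebapp",
    packages=find_packages(),
    install_requires=["some.framework"],   # what the app itself needs
    tests_require=["wsgi_intercept"],      # only needed to run the tests
    test_suite="mywebapp.tests",
)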
I tried using zc.recipe.testbrowser, thinking that maybe it'd look at
the tests_require for the target eggs, but no such luck. The
testrunner also seems pretty promiscuous in looking for things to test
(it tries to import my eggs directory and test them, which predictably
doesn't work), but that's another story.
So... any suggestions on using zc.buildout along with testing
dependencies (generally or with zc.recipe.testrunner)?
Thanks,
Nathan