In-Python virtualisation and packaging
Back in March this year, Carl Meyer did some work to see if it was feasible to bring virtualenv functionality into Python [1] (code at [2]). Carl's changes were to Python code only, which was almost but not quite enough. I built on his changes with updates to C code in getpath.c/getpathp.c, and my code is at [3]. I've kept it synchronised with the core cpython repo, including the recently committed packaging modules.

While there are issues to work through, such as dealing with source builds (and no doubt plenty of others), it now seems possible to create virtual environments and install stuff into them using just the stdlib (modulo currently needing Distribute for the packages which don't yet support setup.cfg-based packaging, but it's all done automatically for a user). So you can do e.g.

    $ python3.3 -m virtualize /tmp/venv
    $ source /tmp/venv/bin/activate.sh
    $ pysetup3 install Mako

and so on. A log of early experiments, which seems reasonably promising, is at [4].

Do people agree that it may be fitting, proper and timely to bring virtualisation into Python, and are there any fundamental flaws anyone can see with the approach used?

If people want to experiment with this code without cloning and building, I created a Debian package using checkinstall, which can be installed using

    sudo dpkg -i pythonv_3.3-1_i386.deb

and removed using

    sudo dpkg -r pythonv

I can make this Debian package available for download, if anyone wants it.

Regards,

Vinay Sajip

[1] http://mail.python.org/pipermail/distutils-sig/2011-March/017519.html
[2] https://bitbucket.org/carljm/cpythonv
[3] https://bitbucket.org/vinay.sajip/pythonv
[4] https://gist.github.com/1022601
On 13/06/2011 12:47, Vinay Sajip wrote:
Back in March this year, Carl Meyer did some work to see if it was feasible to bring virtualenv functionality into Python [1] (code at [2]).
Carl's changes were to Python code only, which was almost but not quite enough. I built on his changes with updates to C code in getpath.c/getpathp.c, and my code is at [3]. I've kept it synchronised with the core cpython repo, including the recently committed packaging modules.
While there are issues to work through such as dealing with source builds (and no doubt plenty of others), it now seems possible to create virtual environments and install stuff into them using just the stdlib (modulo currently needing Distribute for the packages which don't yet support setup.cfg-based packaging, but it's all done automatically for a user). So you can do e.g.
$ python3.3 -m virtualize /tmp/venv
$ source /tmp/venv/bin/activate.sh
$ pysetup3 install Mako
and so on. A log of early experiments, which seems reasonably promising, is at [4].
Do people agree that it may be fitting, proper and timely to bring virtualisation into Python, and are there any fundamental flaws anyone can see with the approach used?
It would certainly need a PEP. There are two options:

- Bring the full functionality into the standard library so that Python supports virtual environments out of the box. As is the case with adding anything to the standard library, it will then be impossible to add features to the virtualization support in Python 3.3 once 3.3 is released - new features will go into 3.4.

- Add only the minimal changes required to support a third-party virtual environment tool.

Virtual environments are phenomenally useful, so I would support having the full tool in the standard library, but it does raise maintenance and development issues.

Don't forget windows support! ;-)

All the best,

Michael Foord
If people want to experiment with this code without cloning and building, I created a Debian package using checkinstall, which can be installed using
sudo dpkg -i pythonv_3.3-1_i386.deb
and removed using
sudo dpkg -r pythonv
I can make this Debian package available for download, if anyone wants it.
Regards,
Vinay Sajip
[1] http://mail.python.org/pipermail/distutils-sig/2011-March/017519.html
[2] https://bitbucket.org/carljm/cpythonv
[3] https://bitbucket.org/vinay.sajip/pythonv
[4] https://gist.github.com/1022601
On Mon, Jun 13, 2011 at 9:55 PM, Michael Foord <fuzzyman@voidspace.org.uk> wrote:
Virtual environments are phenomenally useful, so I would support having the full tool in the standard library, but it does raise maintenance and development issues.
Don't forget windows support! ;-)
Given that it is desirable for tools like virtualenv to support *old* versions of Python on *new* versions of operating systems, it seems to me that there is an inherent element of their feature set that makes including the whole tool questionable.

OTOH, it may make sense to have a baseline tool provided innately, but provide the appropriate third party hooks to allow alternative tools to evolve independently of the stdlib.

How well does the regression test suite cope when run inside such a virtualised environment?

Cheers,
Nick.

--
Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
Nick Coghlan <ncoghlan <at> gmail.com> writes:
Given that it is desirable for tools like virtualenv to support *old* versions of Python on *new* versions of operating systems, it seems to me that there is an inherent element of their feature set that makes including the whole tool questionable.
You're right in terms of the current Python ecosystem and 3.x adoption, because of course this approach requires support from Python itself in terms of its site.py code. However, virtual environments have a utility beyond supporting older Pythons on newer OSes, since another common use case is having different library environments sandboxed from each other on different projects, even if all those projects are using Python 3.3+. The virtualenv module does an intricate bootstrapping dance which needs to accommodate each successive Python version, so there's maintenance overhead that way, too. Carl Meyer, being a pip and virtualenv maintainer, will probably have useful views on this.
OTOH, it may make sense to have a baseline tool provided innately, but provide the appropriate third party hooks to allow alternative tools to evolve independently of the stdlib.
Yes - I'm thinking that what I've proposed is the baseline tool, and the question is about what the virtualisation API needs to look like to allow third-party tools to progress independently of the stdlib but in an interoperable way (a bit like packaging, I suppose).
How well does the regression test suite cope when run inside such a virtualised environment?
https://gist.github.com/1022705

325 tests OK. 5 tests failed:

    test_email test_importlib test_lib2to3 test_packaging test_sysconfig

test_importlib might be broken because I accidentally committed some changes to marshal.c while working on #12291. test_packaging fails because of #12313. test_email fails for a similar reason - Makefile.pre.in is missing test_email in LIBSUBDIRS. test_sysconfig is probably failing because of changes I made, and I'm not sure about test_lib2to3. I will investigate!

Regards,

Vinay Sajip
On Mon, Jun 13, 2011 at 10:50 PM, Vinay Sajip <vinay_sajip@yahoo.co.uk> wrote:
You're right in terms of the current Python ecosystem and 3.x adoption, because of course this approach requires support from Python itself in terms of its site.py code. However, virtual environments have a utility beyond supporting older Pythons on newer OSes, since another common use case is having different library environments sandboxed from each other on different projects, even if all those projects are using Python 3.3+.
Yeah, even if the innate one struggles on later OS releases that changed things in a backwards incompatible way, it will still be valuable on the OS versions that are around at the time that version of Python gets released.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On 06/13/2011 08:07 AM, Nick Coghlan wrote:
On Mon, Jun 13, 2011 at 10:50 PM, Vinay Sajip <vinay_sajip@yahoo.co.uk> wrote:
You're right in terms of the current Python ecosystem and 3.x adoption, because of course this approach requires support from Python itself in terms of its site.py code. However, virtual environments have a utility beyond supporting older Pythons on newer OSes, since another common use case is having different library environments sandboxed from each other on different projects, even if all those projects are using Python 3.3+.
Yeah, even if the innate one struggles on later OS releases that changed things in a backwards incompatible way, it will still be valuable on the OS versions that are around at the time that version of Python gets released.
FWIW, historically pretty much every new Python version has broken virtualenv, and new OS versions almost never have. Virtualenv isn't especially OS-dependent (not nearly as much as some other stdlib modules): the most OS-dependent piece is "shell activation", and that's a feature I would prefer to entirely leave out of the stdlib virtualenv (it's a convenience, not a necessity for virtualenv use, and the need to maintain it for a variety of OS shells is a maintenance burden I don't think Python should adopt).

In fact, the only new-OS-version adjustment I can recall virtualenv needing to make is when Debian introduced dist-packages -- but even that doesn't really apply, as that was a distro-packager change to Python itself. With a built-in virtualenv it would be the distro packagers' responsibility to make sure their patch to Python doesn't break the virtualenv module.

So I don't think a virtualenv stdlib module would be at all likely to break on a new OS release, if Python itself is not broken by that OS release. (It certainly wouldn't be the stdlib module most likely to be broken by OS changes, in comparison to e.g. shutil, threading...)

Carl
On 06/13/2011 06:46 PM, Carl Meyer wrote:
FWIW, historically pretty much every new Python version has broken virtualenv
I should clarify that this is because of the delicate stdlib bootstrapping virtualenv currently has to do; the built-in virtualenv eliminates this entirely and will require much less maintenance for new Python versions. Carl
On 14/06/2011 00:46, Carl Meyer wrote:
[snip...] So I don't think a virtualenv stdlib module would be at all likely to break on a new OS release, if Python itself is not broken by that OS release. (It certainly wouldn't be the stdlib module most likely to be broken by OS changes, in comparison to e.g. shutil, threading...)
And if we gain Carl as a Python committer to help maintain it, then I'd say it is worth doing for that reason alone... Michael
On Jun 14, 2011, at 01:00 AM, Michael Foord wrote:
On 14/06/2011 00:46, Carl Meyer wrote:
[snip...] So I don't think a virtualenv stdlib module would be at all likely to break on a new OS release, if Python itself is not broken by that OS release. (It certainly wouldn't be the stdlib module most likely to be broken by OS changes, in comparison to e.g. shutil, threading...)
And if we gain Carl as a Python committer to help maintain it, then I'd say it is worth doing for that reason alone...
+1 -Barry
On 14.06.2011, at 01:46, Carl Meyer wrote:
On 06/13/2011 08:07 AM, Nick Coghlan wrote:
On Mon, Jun 13, 2011 at 10:50 PM, Vinay Sajip <vinay_sajip@yahoo.co.uk> wrote:
You're right in terms of the current Python ecosystem and 3.x adoption, because of course this approach requires support from Python itself in terms of its site.py code. However, virtual environments have a utility beyond supporting older Pythons on newer OSes, since another common use case is having different library environments sandboxed from each other on different projects, even if all those projects are using Python 3.3+.
Yeah, even if the innate one struggles on later OS releases that changed things in a backwards incompatible way, it will still be valuable on the OS versions that are around at the time that version of Python gets released.
FWIW, historically pretty much every new Python version has broken virtualenv, and new OS versions almost never have. Virtualenv isn't especially OS-dependent (not nearly as much as some other stdlib modules): the most OS-dependent piece is "shell activation", and that's a feature I would prefer to entirely leave out of the stdlib virtualenv (it's a convenience, not a necessity for virtualenv use, and the need to maintain it for a variety of OS shells is a maintenance burden I don't think Python should adopt).
In fact, the only new-OS-version adjustment I can recall virtualenv needing to make is when Debian introduced dist-packages -- but even that doesn't really apply, as that was distro-packager change to Python itself. With a built-in virtualenv it would be the distro packagers responsibility to make sure their patch to Python doesn't break the virtualenv module.
FTR, there is some special casing for Mac OS framework installs included, too. Not sure if that should be considered a stability threatening issue though since Apple hasn't changed much on that layout, AFAIK.
So I don't think a virtualenv stdlib module would be at all likely to break on a new OS release, if Python itself is not broken by that OS release. (It certainly wouldn't be the stdlib module most likely to be broken by OS changes, in comparison to e.g. shutil, threading...)
Jannis
On 14 Jun, 2011, at 11:15, Jannis Leidel wrote:
On 14.06.2011, at 01:46, Carl Meyer wrote:
In fact, the only new-OS-version adjustment I can recall virtualenv needing to make is when Debian introduced dist-packages -- but even that doesn't really apply, as that was distro-packager change to Python itself. With a built-in virtualenv it would be the distro packagers responsibility to make sure their patch to Python doesn't break the virtualenv module.
FTR, there is some special casing for Mac OS framework installs included, too. Not sure if that should be considered a stability threatening issue though since Apple hasn't changed much on that layout, AFAIK.
Apple hasn't changed anything w.r.t. the basic layout of frameworks for a long time, but does mess with the structure of site-packages in their releases of Python. That shouldn't affect this feature though. For the most part a Python.framework is just a unix install stuffed inside a framework.

The special-case code in virtualenv for frameworks is needed because a framework uses a different mechanism to locate sys.prefix than a classical unix install: sys.prefix is the directory that contains the python shared library.

There is another feature of a framework install that would be nice to have in a virtualenv: the python and pythonw commands for a framework build are small programs that use execv to start the real interpreter that's stored in a Python.app inside the framework. This is needed to be able to access GUI functionality from the command-line, as Apple's GUI frameworks assume they are used by code in an application bundle.

Ronald
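[For anyone experimenting with the framework behaviour Ronald describes, a quick illustrative check - not part of the pythonv patch - of whether the running interpreter is a framework build and where it thinks its prefixes are:]

    import sys
    import sysconfig

    # PYTHONFRAMEWORK is a non-empty string only for macOS framework builds.
    is_framework = bool(sysconfig.get_config_var('PYTHONFRAMEWORK'))
    print('framework build:', is_framework)
    print('sys.prefix:     ', sys.prefix)       # for a framework build this points
    print('sys.exec_prefix:', sys.exec_prefix)  # inside Python.framework/Versions/X.Y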
Nick Coghlan <ncoghlan <at> gmail.com> writes:
How well does the regression test suite cope when run inside such a virtualised environment?
I followed this up, and three tests fail: test_lib2to3, test_packaging and test_sysconfig. These are errors which show up on the default branch too [1][2]; full results are at [3]. I've been keeping the pythonv branch synchronised with default - these results appear to be quite stable/repeatable (old versions of the results are available in the gist at [3]). I did another test: in a pythonv-created environment, I tested pythonv/pysetup3 by trying to install all PyPI packages with a Python 3 trove classifier, where a source distribution can be found. This smoke test shows a total of 398 such packages, 310 of which were installed in the environment without errors. That's 78% - not too bad for this early stage in the game. The details of the failing 88 packages are at [4], and some of these are pysetup3 issues but a fair few are bugs in the packages themselves (e.g. SyntaxErrors in setup.py, or missing readme files that setup.py expects to be there) or missing dependencies like boost.python or similar C-level dependencies. These tests were done with a patched version of Distribute which uses sys.site_prefix is available, falling back to sys.prefix when not (so the Distribute change is backward compatible). Regards, Vinay Sajip [1] http://bugs.python.org/issue12331 [2] http://bugs.python.org/issue9100 [3] https://gist.github.com/1022705 [4] http://gist.github.com/1037662
Michael Foord <fuzzyman <at> voidspace.org.uk> writes:
It would certainly need a PEP.
Of course.
There are two options:
Bring the full functionality into the standard library so that Python supports virtual environments out of the box. As is the case with adding anything to the standard library it will then be impossible to add features to the virtualization support in Python 3.3 once 3.3 is released - new features will go into 3.4.
Add only the minimal changes required to support a third-party virtual environment tool.
Agreed. As I see it, the "minimal changes required" are everything in my fork except for "virtualize.py", which was actually written as an external module "pmv.py" - Poor Man's Virtualenv ;-) Having it as a minimal implementation in the stdlib accords with "batteries included", but even as it stands, virtualize.py does try to cater for customisation.

Firstly, there's a virtualizer_factory callable which can be overridden for customisation. That's called to produce a virtualizer, whose virtualize method is called with the target directory. The virtualize() function in virtualize.py just does this set of steps. I can't claim to have thought of everything, but it's a simple API which could have any number of implementations, not just the default one in the Virtualizer class in virtualize.py.
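[A minimal sketch of how a third-party tool might use the hook described above, based only on the API shape given in this message: virtualizer_factory, virtualize() and Virtualizer come from the description; post_setup and the exact way the factory is overridden are illustrative assumptions.]

    import virtualize  # the stdlib module proposed on the pythonv branch

    class CustomVirtualizer(virtualize.Virtualizer):
        # The factory produces an object whose virtualize() method is called
        # with the target directory; a subclass can add extra steps afterwards.
        def virtualize(self, target_dir):
            super().virtualize(target_dir)
            self.post_setup(target_dir)

        def post_setup(self, target_dir):
            # hypothetical extension point: e.g. pre-install a standard set of
            # packages, or write extra helper scripts into the environment
            pass

    # Point the factory at the customised implementation (assumed mechanism),
    # then create an environment using it.
    virtualize.virtualizer_factory = CustomVirtualizer
    virtualize.virtualize('/tmp/custom-venv')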
Don't forget windows support!
I haven't. Though I haven't tested the most recent changes on Windows yet, I have tested the basic approach under Windows (the code doesn't rely on symlinks, but rather, copies of executables/DLLs). (All Windows testing so far has admittedly been using source builds rather than via a binary installer.) Regards, Vinay Sajip
On Mon, Jun 13, 2011 at 10:22 PM, Vinay Sajip <vinay_sajip@yahoo.co.uk> wrote:
Michael Foord <fuzzyman <at> voidspace.org.uk> writes:
Don't forget windows support!
I haven't. Though I haven't tested the most recent changes on Windows yet, I have tested the basic approach under Windows (the code doesn't rely on symlinks, but rather, copies of executables/DLLs). (All Windows testing so far has admittedly been using source builds rather than via a binary installer.)
You should be able to use symlinks even on Windows these days (although granted they won't on portable media that uses a non-symlink friendly filesystem, regardless of OS).

Cheers,
Nick.

--
Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
Nick Coghlan <ncoghlan <at> gmail.com> writes:
You should be able to use symlinks even on Windows these days (although granted they won't on portable media that uses a non-symlink friendly filesystem, regardless of OS).
Plus I'm not sure Windows XP supports true symlinks - I think you have to make do with "junctions" a.k.a. "reparse points" which are more shambolic than symbolic ;-) I know symlinks are available on Vista, Windows Server 2008 and later, but XP is still very common. Regards, Vinay Sajip
On Mon, Jun 13, 2011 at 08:42, Vinay Sajip <vinay_sajip@yahoo.co.uk> wrote:
Nick Coghlan <ncoghlan <at> gmail.com> writes:
You should be able to use symlinks even on Windows these days (although granted they won't on portable media that uses a non-symlink friendly filesystem, regardless of OS).
Plus I'm not sure Windows XP supports true symlinks - I think you have to make do with "junctions" a.k.a. "reparse points" which are more shambolic than symbolic ;-) I know symlinks are available on Vista, Windows Server 2008 and later, but XP is still very common.
I don't think we have any stdlib support for junctions, although we could certainly add it. In 3.2 we added symlink support for files and directories, which as you say is a Vista and beyond feature.
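[A sketch of the "symlink where possible, copy otherwise" approach being discussed - not taken from the pythonv code. os.symlink is available on Windows from Python 3.2, but can raise OSError without the required privilege, on XP, or on filesystems that don't support links, so a copy fallback keeps environment creation working everywhere:]

    import os
    import shutil

    def link_or_copy(src, dst):
        """Create dst as a symlink to src where the platform allows it,
        otherwise fall back to copying the file (e.g. python.exe, DLLs)."""
        try:
            os.symlink(src, dst)
        except (OSError, NotImplementedError):
            shutil.copy2(src, dst)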
On 06/13/2011 06:55 AM, Michael Foord wrote:
There are two options:
Bring the full functionality into the standard library so that Python supports virtual environments out of the box. As is the case with adding anything to the standard library it will then be impossible to add features to the virtualization support in Python 3.3 once 3.3 is released - new features will go into 3.4.
I think it's not hard to provide enough hooks to allow third-party tools to extend the virtualenv-creation process, while still having enough code in the stdlib to allow actual creation of virtualenvs. Virtualenv already has very few features, and doesn't get very much by way of new feature requests -- all the UI sugar and significant shell integration goes into Doug Hellmann's virtualenvwrapper, and I wouldn't foresee that changing. IOW, I don't think the maintenance concerns outweigh the benefits of being able to create virtualenvs with an out-of-the-box Python.
Add only the minimal changes required to support a third-party virtual environment tool.
Virtual environments are phenomenally useful, so I would support having the full tool in the standard library, but it does raise maintenance and development issues.
Don't forget windows support! ;-)
All the best,
Michael Foord
On Mon, 13 Jun 2011 11:47:33 +0000 (UTC) Vinay Sajip <vinay_sajip@yahoo.co.uk> wrote:
$ python3.3 -m virtualize /tmp/venv
$ source /tmp/venv/bin/activate.sh
$ pysetup3 install Mako
and so on. A log of early experiments, which seems reasonably promising, is at [4].
Do people agree that it may be fitting, proper and timely to bring virtualisation into Python, and are there any fundamental flaws anyone can see with the approach used?
This sounds really great, and definitely needs a PEP so that we can ask many questions :)

As a side-note, I think calling it "virtualization" is a recipe for confusion.

Regards,

Antoine.
On Mon, Jun 13, 2011 at 10:57 PM, Antoine Pitrou <solipsis@pitrou.net> wrote:
As a side-note, I think calling it "virtualization" is a recipe for confusion.
Indeed, OS level virtualisation pretty much has a lock on that term. "virtual environments" skates close to it but manages to avoid it well enough to avoid confusion.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
Nick Coghlan <ncoghlan <at> gmail.com> writes:
On Mon, Jun 13, 2011 at 10:57 PM, Antoine Pitrou <solipsis <at> pitrou.net> wrote:
As a side-note, I think calling it "virtualization" is a recipe for confusion.
Indeed, OS level virtualisation pretty much has a lock on that term. "virtual environments" skates close to it but manages to avoid it well enough to avoid confusion.
Or as they involve encapsulating paths and libraries, perhaps we could call them "capsules" ;-) though I think the term virtualenv is pretty entrenched now in the Python community. Regards, Vinay Sajip
On 13Jun2011 13:47, Vinay Sajip <vinay_sajip@yahoo.co.uk> wrote:
| Nick Coghlan <ncoghlan <at> gmail.com> writes:
| > On Mon, Jun 13, 2011 at 10:57 PM, Antoine Pitrou <solipsis <at> pitrou.net> wrote:
| > > As a side-note, I think calling it "virtualization" is a recipe for confusion.
| >
| > Indeed, OS level virtualisation pretty much has a lock on that term.
| > "virtual environments" skates close to it but manages to avoid it well
| > enough to avoid confusion.
|
| Or as they involve encapsulating paths and libraries, perhaps we could call
| them "capsules" ;-) though I think the term virtualenv is pretty entrenched now
| in the Python community.

"virtualenv" by all means - we all know what is meant. But "virtualisation" - I also am -1 on that. Indeed, when I started reading this thread my expectation was wrong for that very reason.

Same issue with "capsules" (yes I know you weren't serious) - too generic a term, too vague.

Cheers,
--
Cameron Simpson <cs@zip.com.au> DoD#743 http://www.cskk.ezoshosting.com/cs/

It looked good-natured, she thought;
Still it had very long claws and a great many teeth,
so she felt it ought to be treated with respect.
On Jun 13, 2011, at 11:47 AM, Vinay Sajip wrote:
Do people agree that it may be fitting, proper and timely to bring virtualisation into Python, and are there any fundamental flaws anyone can see with the approach used?
Yes, absolutely. We'll hash out the details when the PEP is published, and bikeshed on all the terminology, but I really think this would be a very powerful addition to the standard library, so +1. Hopefully, the maintenance issues can be sorted out.

Question: how hard would it be to backport the work you've done to Python 3.2? Obviously I'm not saying it should be ported to the official 3.2 branch, but if *someone* were interested in doing so, would it be possible? Sounds like you can almost get there with stdlib changes, but would require a few C changes too (I haven't looked at the diff yet). I'm just wondering if the same API could be made available to Python 3.2 as a third party module. It sounds like "almost, but not quite".
If people want to experiment with this code without cloning and building, I created a Debian package using checkinstall, which can be installed using
sudo dpkg -i pythonv_3.3-1_i386.deb
and removed using
sudo dpkg -r pythonv
I can make this Debian package available for download, if anyone wants it.
Is the Debian packaging branch available too? I'd be happy to throw that in my PPA for Ubuntu users to play with. Cheers, -Barry
Barry Warsaw <barry <at> python.org> writes:
Question: how hard would it be to backport the work you've done to Python 3.2? Obviously I'm not saying it should be ported to the official 3.2 branch, but if *someone* were interested in doing so, would it be possible? Sounds like you can almost get there with stdlib changes, but would require a few C changes too (I haven't looked at the diff yet). I'm just wondering if the same API could be made available to Python 3.2 as a third party module. It sounds like "almost, but not quite".
I think it's feasible - as far as I know, there's nothing 3.3-specific about the changes that were made, other than just happening to be against the default branch. AFAIK the getpath.c/getpathp.c changes will also work on 3.2, as all they do is look for a config file in a specific place and read a path from it if it's there. If it's not there, no biggie. If it's there, it sets up the sys.prefix/sys.exec_prefix values from that path.

Possibly Carl's original Python changes would be easier to work from, since the sysconfig stuff has now changed quite a bit because of packaging coming into cpython. For one thing, the _INSTALL_SCHEMES dict is replaced by reading that data from a config file.
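[As a rough Python rendering of the C startup logic described above - the real code is in getpath.c/getpathp.c on the pythonv branch, and the config file name and key used here are illustrative assumptions, not the actual ones:]

    import os
    import sys

    def find_env_prefix(executable):
        """Return the environment prefix named in a config file next to the
        interpreter, or None to fall back to the normal prefix search."""
        cfg = os.path.join(os.path.dirname(executable), 'env.cfg')  # name assumed
        if not os.path.isfile(cfg):
            return None                       # no config: behave like a normal install
        with open(cfg) as f:
            for line in f:
                key, sep, value = line.partition('=')
                if sep and key.strip() == 'home':   # key assumed
                    return value.strip()
        return None

    prefix = find_env_prefix(sys.executable)
    # If a prefix was found, the interpreter would derive sys.prefix and
    # sys.exec_prefix from it; otherwise startup proceeds as usual.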
Is the Debian packaging branch available too? I'd be happy to throw that in my PPA for Ubuntu users to play with.
My Debian-packaging-fu is not that good, I'm afraid, so there's no branch for the .deb, as such. I made the package by running make and then

    sudo checkinstall -D --fstrans=no

which takes forever (God knows why - it's many many minutes at 100% CPU) but eventually comes up with the .deb.

Regards,

Vinay Sajip
On Jun 13, 2011, at 04:00 PM, Vinay Sajip wrote:
My Debian-packaging-fu is not that good, I'm afraid, so there's no branch for the .deb, as such. I made the package by running make and then
sudo checkinstall -D --fstrans=no
which takes forever (God knows why - it's many many minutes at 100% CPU) but eventually comes up with the .deb.
Ah, no I don't think that'll be helpful. I can probably reuse the python3.3 packaging stuff to do a PPA. (It takes that long because it basically does a `make install`.) -Barry
Barry Warsaw <barry <at> python.org> writes:
Ah, no I don't think that'll be helpful. I can probably reuse the python3.3 packaging stuff to do a PPA.
Okay, go for it! Is there a specific tutorial somewhere about making a PPA for Python? (As opposed to more generalised tutorials - or would they be sufficient?)
(It takes that long because it basically does a `make install`.)
I realise that, as well as recording what it's doing, but that part seems to happen fairly quickly. Then it says "Copying files to the temporary directory..." and that part seems to take forever. The whole deb is under 25MB; what could be taking many minutes?

Regards,

Vinay
On 13Jun2011 17:31, Vinay Sajip <vinay_sajip@yahoo.co.uk> wrote:
| Barry Warsaw <barry <at> python.org> writes:
| > Ah, no I don't think that'll be helpful. I can probably reuse the python3.3
| > packaging stuff to do a PPA.
|
| Okay, go for it! Is there a specific tutorial somewhere about making a PPA for
| Python? (As opposed to more generalised tutorials - or would they be sufficient?)
|
| > (It takes that long because it basically does a
| > `make install`.)
|
| I realise that, as well as recording what it's doing, but that part seems to
| happen fairly quickly.
|
| Then it says "Copying files to the temporary directory..." and that part seems
| to take forever. The whole deb is under 25MB, what could be taking many minutes?

[ wild speculation ... ]

How does it decide what to copy? If it is a "blind" make-me-a-package tool it may be scanning the whole OS install or something (expensive but linear) and maybe then doing some ghastly O(n^2) changed file comparison.

Inefficient comparison stuff leaks into the real world all the time; the Linux kernel installs have a "hardlinks" program which is one of my pet hates for this very reason - it runs over the modules trees looking for identical module files to hard link and if you've got several kernels lying around it is unforgivably slow.

Or maybe it loads the package install db into memory and does something expensive to see what's not accounted for.

[ end speculation, but nothing useful now follows ... ]

Cheers,
--
Cameron Simpson <cs@zip.com.au> DoD#743 http://www.cskk.ezoshosting.com/cs/

"He deserves death!"
"Deserves it! I daresay he does. And many die that deserve life. Is it in your
power to give it to them? Then do not be so quick to deal out death in
judgement, for even the very wise may not see all ends."
- Gandalf, _The Lord of the Rings_
participants (10):
- Antoine Pitrou
- Barry Warsaw
- Brian Curtin
- Cameron Simpson
- Carl Meyer
- Jannis Leidel
- Michael Foord
- Nick Coghlan
- Ronald Oussoren
- Vinay Sajip