It seems that recently, a number of merges broke, in the sense
that files added to the trunk were not merged into the branch.
Is that a general problem with svnmerge? Should that be
fixed to automatically do a "svn add" when merging changes
that included file additions and removals?
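The manual fix-up implied above can be sketched in shell (svn itself isn't exercised here; sample `svn status` output stands in for a real working copy):

```shell
# Given 'svn status' output after a merge, where '?' marks files that
# exist on disk but are not under version control:
status_output='?       new_module.py
M       existing_file.py'
# These are the paths that would need an explicit 'svn add':
printf '%s\n' "$status_output" | awk '/^\?/ {print $2}'
```

In a real working copy the pipeline would end in `| xargs svn add`, which is exactly the step svnmerge currently leaves to the user.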
At 05:55 PM 3/20/2008 +0000, Paul Moore wrote:
>It's not that I object to the existence of automatic dependency
>management, I object to being given no choice, as if my preference for
>handling things manually is unacceptable.
Note that easy_install has a --no-deps option, and you can make it
the default in your distutils.cfg file.
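For reference, a sketch of what that default could look like in a per-user distutils.cfg (the option spelling is assumed from the usual dash-to-underscore convention for command-line options in config files):

```ini
[easy_install]
# equivalent to passing --no-deps on every easy_install invocation
no_deps = true
```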
Also, setuptools-based packages *can* build bdist_wininst
installers. (In fact, if memory serves, I added that feature at your request.)
Personally, I'm not very thrilled with the number of complaints on
this thread that could be resolved by RTFMing. There are extensive
manuals, and they do contain the information that some people are
saying isn't there. In several cases that I've seen here today
alone, there are actually *entries in the tables of contents* that
name the precise thing people here are characterizing as undocumented
or even *impossible*, like:
* Making your package available for EasyInstall
* Installing on Un-networked Machines
* Custom Installation Locations
* Restricting Downloads with --allow-hosts
It's easy to get the impression that people not only didn't RTFM,
they didn't even Read The Friendly Table Of Contents of the said
M. Nor, when they found something in the manual that they didn't
understand, did they write to the distutils-sig to ask anybody to
explain it, and perhaps suggest ways the FMs could be improved.
Under the instruction of Martin, I've made some small changes to 2to3
so that it keeps track of which fixers act on which level of node. The
speedup isn't too shabby: running on the example file, processing
time went from 9 to 7 seconds, and the test suite dropped from 400 to
I have attached the hacky, ugly, proof-of-concept patch to http://
If there's no reason not to implement this sort of thing, I'll clean
it up and commit it when I get home (or something).
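The bookkeeping described above might look something like this dict-based dispatch (a minimal sketch; the class and names are illustrative, not 2to3's actual internals):

```python
from collections import defaultdict

class FixerIndex:
    """Index fixers by the node types they match, so the tree walker
    consults only the relevant fixers at each node instead of all of
    them."""
    def __init__(self):
        self._by_type = defaultdict(list)

    def register(self, fixer_name, node_types):
        for node_type in node_types:
            self._by_type[node_type].append(fixer_name)

    def fixers_for(self, node_type):
        # Fixers that never match this node type are skipped entirely.
        return self._by_type.get(node_type, [])

index = FixerIndex()
index.register("fix_print", ["print_stmt"])
index.register("fix_has_key", ["trailer"])
print(index.fixers_for("print_stmt"))   # only fix_print is consulted here
```

The speedup comes from the walker doing one dict lookup per node rather than asking every fixer whether it applies.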
David Wolever - http://wolever.net/~wolever
AIM: davidswolever MSN: david(a)wolever.net
"Without payment you have received; without payment you are to give."
(Mat 10:8 ISV)
On Wed, Mar 19, 2008 at 10:42 AM, Stefan Ring <s.r(a)visotech.at> wrote:
> On Mar 19, 2008 05:24 PM, Adam Olsen <rhamph(a)gmail.com> wrote:
> > On Wed, Mar 19, 2008 at 10:09 AM, Stefan Ring <s.r(a)visotech.at> wrote:
> > > Adam Olsen <rhamph <at> gmail.com> writes:
> > >
> > > > Can you try with a call to sched_yield(), rather than nanosleep()?
> > > > It
> > > > should have the same benefit but without as much performance hit.
> > > >
> > > > If it works, but is still too much hit, try tuning the
> > > > checkinterval
> > > > to see if you can find an acceptable throughput/responsiveness
> > > > balance.
> > > >
> > >
> > > I tried that, and it had no effect whatsoever. I suppose it would
> > > have an effect on a single CPU or an otherwise heavily loaded SMP
> > > system, but that's not the scenario we care about.
> > So you've got a lightly loaded SMP system? Multiple threads all
> > blocked on the GIL, multiple CPUs to run them, but only one CPU is
> > active? In that case I can imagine how sched_yield() might finish
> > before the other CPUs wake up a thread.
> > A FIFO scheduler would be the right thing here, but it's only a short
> > term solution. Care for a long term solution? ;)
> > http://code.google.com/p/python-safethread/
> I've already seen that but it would not help us in our current
> situation. The performance penalty really is too heavy. Our system is
> slow enough already ;). And it would be very difficult, bordering on
> impossible, to parallelize. Plus, I can imagine that all extension
> modules (and our own code) would have to be adapted.
> The FIFO scheduler is perfect for us because the load is typically quite
> low. It's mostly at those times when someone runs a lengthy calculation
> that all other users suffer greatly increased response times.
So you want responsiveness when idle but throughput when busy?
Are those calculations primarily python code, or does a C library do
the grunt work? If it's a C library you shouldn't be affected by
safethread's increased overhead.
Adam Olsen, aka Rhamphoryncus
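The checkinterval tuning suggested earlier in the thread can be sketched as follows (using sys.setswitchinterval, the time-based Python 3 replacement; Python 2's sys.setcheckinterval counted bytecode instructions instead):

```python
import sys

# Save the default so throughput-oriented behaviour can be restored.
default_interval = sys.getswitchinterval()

# A smaller interval means more frequent thread switches: better
# responsiveness for the other threads, at some throughput cost.
sys.setswitchinterval(0.0005)

# ... run the latency-sensitive mixed workload here ...

sys.setswitchinterval(default_interval)
```

The balance point is workload-specific, which is why the suggestion above is to tune it experimentally.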
I'm reviving a very old thread based on discussions with Martin at pycon.
> Sent: Monday, 23 July 2007 5:12 PM
> Subject: Re: [Distutils] distutils.util.get_platform() for Windows
Rather than forcing everyone to read the context, allow me to summarize:
On 64bit Windows versions, we need a "string" that identifies the
platform, and this string should ideally be used consistently. This
original thread related to the files created by distutils (eg,
pywin32-210.win???64??-py2.6.exe) but it seems obvious that we should be
consistent wherever Python wants to display the platform (eg, in the
startup banner, in platform.py, etc).
In the old thread, there was a semi-consensus that 'x86_64' be used by
distutils (and indeed, Lib/distutils/util.py in get_platform() has been
changed, by me, to use this string), but the Python 'banner', for example,
reports AMD64. Platform.py doesn't report much at all in this area, at
least when pywin32 isn't installed, but it arguably should.
Both Martin and I prefer AMD64 as the string, for various reasons.
Firstly, it is less ugly than 'x86_64', and doesn't include an '_' or
'-', which might confuse parsing by humans or computers. Martin also
made the point that AMD invented the architecture and AMD64 is their
preferred name, so we should respect that.
So, at the risk of painting a bike-shed, I'd like to propose that we adopt
'AMD64' in distutils (needs a change), platform.py (needs a change to use
sys.getwindowsversion() in preference to pywin32, if possible, anyway),
and the Python banner (which already uses AMD64).
Any objections? Any strong feelings that using 'AMD' will confuse people
with Intel processors? Strong feelings about the parsability of the name
(PJE? <wink>)? Strong feelings about the color <wink>?
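For what it's worth, the strings involved can be inspected from a running interpreter; a quick sketch (the values shown are examples and differ per platform: platform.machine() reports 'AMD64' on 64-bit Windows but 'x86_64' on most Linux systems):

```python
import platform
import sys

# The various places a platform/architecture string surfaces:
print(sys.platform)                # e.g. 'win32', 'linux2', 'darwin'
print(platform.machine())          # e.g. 'AMD64' (Windows), 'x86_64' (Linux)
print(platform.architecture()[0])  # e.g. '64bit'
```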
This flag is exposed to Python code as sys.flags.py3k_warning.
So the hack added to some of the test code that I saw go by on
python-checkins isn't needed :)
Nick Coghlan | ncoghlan(a)gmail.com | Brisbane, Australia
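A quick sketch of reading that flag (on Python 3 interpreters the attribute no longer exists, so it is read defensively here):

```python
import sys

# sys.flags is a read-only struct sequence of interpreter option flags.
# On Python 2 run with -3, py3k_warning is 1; on Python 3 the attribute
# is absent, hence the getattr default.
py3k_warning = getattr(sys.flags, "py3k_warning", 0)
print(bool(py3k_warning))
```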
Great idea! Sounds like a PEP (informational, probably) would be a good idea.
On Tue, Mar 18, 2008 at 4:59 PM, Bill Janssen <janssen(a)parc.com> wrote:
> I don't think this is bike-shedding.
> The debate about "AMD64" vs. "amd64" vs. "x86_64" reminded me that
> I've been bit more and more frequently by bits of platform-specific
> knowledge scattered around the standard library. The latest is the
> code in distutils.unixccompiler that tries to figure out what flags to
> send to the linker in order to add a dynamic library path lookup to a
> shared library.
> There are lots of different ways of figuring out which platform is
> being used, and they're all over the place. The code in
> distutils.unixccompiler uses "sys.platform[:6]", and looks for
> "darwin"; the code in urllib.py uses "os.name", and expects it to be
> "mac", but later looks for "sys.platform == 'darwin'; posixfile
> believes that platforms come with version numbers ("linux2", "aix4");
> pydoc and tarfile have tests for "sys.platform == 'mac'". tempfile
> looks at os.sys.platform *and* os.name.
> Could well be that all of these are correct (I believe that "mac", for
> instance, refers to the generations of the Mac OS before OS X). But
> it means that when someone tries to update "Python" to a new major
> version release for, say, OS X, it's really easy to miss things. And
> the fact that the platform version is sometimes included and sometimes
> not is also problematic; darwin9 is different from darwin8 in some
> important aspects.
> It would be nice to
> (a) come up with a list of standard platform symbols,
> (b) put those symbols somewhere in the documentation,
> (c) have one way of discovering them, via sys or os or platform or
> whichever module,
> (d) add a standard way of discovering the platform version, and
> (e) stamp out all the other ways that have crept in over the years.
--Guido van Rossum (home page: http://www.python.org/~guido/)
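The scattered checks listed above can be seen side by side with a short sketch:

```python
import os
import platform
import sys

# The different platform signals the stdlib has historically consulted:
checks = {
    "sys.platform": sys.platform,             # e.g. 'linux2', 'darwin', 'win32'
    "os.name": os.name,                       # e.g. 'posix', 'nt'
    "platform.system()": platform.system(),   # e.g. 'Linux', 'Darwin'
    "platform.release()": platform.release(), # the version piece, e.g. '9.0.0'
}
for name, value in checks.items():
    print(name, "->", value)
```

Note how the version ("darwin9" vs. "darwin8") only shows up in some of these, which is exactly the inconsistency being complained about.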
I just wanted to throw in a quick note that this package:
which was just uploaded by Daniel Krech, is a lot closer in spirit to
what I was trying to accomplish with PEP 365 than Guido's bootstrap
proposal. Perhaps there's room for both in the stdlib? (And note
that even though the examples use eggs, it does not do anything
egg-specific; any zipfile importable by Python works with autoinstall.)
There are a number of changes I would suggest making to autoinstall,
like making it possible to access information about files in the
cache, supporting non-toplevel modules, programmatic and
environment-level control of the cache directory, that sort of
thing. Heck, it'd be nice (although not essential) for it to support
finding the right URL from PyPI.
I also suspect that users might want to have some way to disable it
or restrict it to certain hosts, etc., perhaps through a
configuration file. It should probably also default the cache to a
temporary directory, in the absence of other input.
(Experience with pkg_resources' caching approach suggests that using
the current directory or a home-directory-based location by default
was a bad idea, at least without a fallback to a tempdir on write failure.)
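The tempdir default and environment-level override suggested above could be sketched like this (the variable name and function are illustrative, not autoinstall's real API):

```python
import os
import tempfile

def default_cache_dir(env_var="AUTOINSTALL_CACHE"):
    # An explicit environment override wins; otherwise fall back to a
    # directory under the system temp dir rather than $HOME or CWD.
    override = os.environ.get(env_var)
    if override:
        return override
    return os.path.join(tempfile.gettempdir(), "autoinstall-cache")

print(default_cache_dir())
```

A tempdir default sidesteps the write-permission failures seen with home-directory and current-directory caches.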
At the moment, fix_print.py does the Right Thing when it finds ``from
__future__ import print_function``... But the 2to3 parser gets upset
when print() is passed kwargs:
$ cat x.py
from __future__ import print_function
print("Hello, world!", end=' ')
$ 2to3 x.py
RefactoringTool: Can't parse x.py: ParseError: bad input: type=22,
value='=', context=('', (2, 26))
What would be the best way to start fixing this?
#2412 is the related bug.
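One possible starting point: detect the __future__ import before parsing, so the driver could choose a grammar in which print is an ordinary name rather than a keyword. A sketch (the regular-expression pre-scan is illustrative, not lib2to3's real mechanism):

```python
import re

_FUTURE_PRINT = re.compile(
    r"^from\s+__future__\s+import\s+.*\bprint_function\b", re.MULTILINE)

def uses_print_function(source):
    # True if the module opts in to print-as-a-function semantics.
    return bool(_FUTURE_PRINT.search(source))

src = 'from __future__ import print_function\nprint("Hello", end=" ")\n'
print(uses_print_function(src))   # → True
```

With that signal available, the failing case above (keyword arguments to print) would be parsed as an ordinary function call instead of a print statement.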