The company I work for has an IBM P-690 server that is in the process of
being retired. It is still a viable server, and has seen almost 0 use (it
was our failover machine). Unfortunately for us, this machine has little to
no resale value, and will probably be junked. I'd rather it go to a good
home, and having taken advantage of the work of the python development
community for a number of years (we use python extensively in system admin
and database work), I saw this as an opportunity to give back a little.
So, if anyone is interested in this machine, please let me know. We are
looking at perhaps a November time frame for when it will be removed from
our remote site. The P690 is no small machine: it is the size of a full rack,
has 32 Power4 processors, and takes (I believe) 2- or 3-phase 220-volt
power. It weighs nearly a ton. We are running AIX 5.3 on it, but I
believe that the machine is capable of running a PowerPC flavor of Linux as
well. This would make a great test machine for python HPC modules or as a
community box where developers could test their code against a PowerPC
architecture. It has lots of life left and I'd rather see it put to use than
junked.
On Thu, Aug 19, 2010 at 12:47 AM, Éric Araujo <merwok(a)netwok.org> wrote:
> Let’s turn one error into an occasion for learning:
>> Manually merge r84187
> I was bad with numbers and actually ran svnmerge merge -r 81417, which
> did nothing. Since I have manually merged now, do I have to update the
> bookkeeping information manually? My understanding of the dev FAQ is:
> svnmerge block -r 84187. Is that right?
What I do is:
$ cd /the/right/branch/or/trunk
$ svn ci -m 'comment'
  (you get a revision number)
$ cd py3k
$ svn up
$ svnmerge.py merge -r revision
  (run the tests)
$ svn ci -F svn<tab>
  (there's a svn*.txt file generated by the svnmerge tool; don't write a
  manual commit message)
Then I apply the same in all branches. Notice that if you merge
something to py3k, the merge to the 3.x
release branch is done with the revision number of the py3k commit,
not the original one.
And I use "svnmerge block -r revision" for branches where the commit
should not be applied; don't forget to do this.
(The same revision-number cascading applies.)
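Concretely, for a hypothetical trunk commit r84187 that must reach py3k but
be blocked on a release branch, the sequence would look like the transcript
below (revision numbers invented for illustration; the generated message
file is typically named svnmerge-commit-message.txt, so adjust to whatever
svn*.txt file svnmerge actually wrote):

```
$ cd py3k
$ svn up
$ svnmerge.py merge -r 84187      # merge the original trunk revision
$ # run the test suite, then:
$ svn ci -F svnmerge-commit-message.txt
$ # suppose that commit becomes r84210; the release branch uses THAT number
$ cd ../release31-maint
$ svn up
$ svnmerge.py block -r 84210      # the change must not land here
$ svn ci -F svnmerge-commit-message.txt
```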
Let me know if you have any other issues.
> Thank you.
Tarek Ziadé | http://ziade.org
On Thursday, August 19, 2010 at 19:43:15, amaury.forgeotdarc wrote:
> Author: amaury.forgeotdarc
> Date: Thu Aug 19 19:43:15 2010
> New Revision: 84209
> Check the return values for all functions returning an ast node.
> Failure to do it may result in strange error messages or even crashes,
> in admittedly convoluted cases that are normally syntax errors, like:
> def f(*xx, __debug__): pass
Would it be possible to write tests for this change?
I am getting some unexpected behavior in Python 2.6.4 on a WinXP SP3 box.
If I run the following:
from pylab import randint
for s in range(100):
    print randint(0, 1)
I get 100 zeroes.
If I import randint from random instead, I get the expected behavior
of a random distribution of 1s and 0s.
I found this by importing * from pylab after importing randint from random.
What is going on? Is pylab's randint function broken somehow? Could
this be due to installing scipy into a 2.6 environment when it was
designed for the 2.5 environment?
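For reference, the two functions follow different conventions: the stdlib's
random.randint includes both endpoints, while the randint that pylab
re-exports (numpy's) excludes the upper bound, so randint(0, 1) from pylab
can only ever return 0. A minimal sketch using only the stdlib:

```python
import random

random.seed(0)  # deterministic for the demonstration
# stdlib convention: randint(a, b) includes BOTH endpoints
draws = {random.randint(0, 1) for _ in range(100)}
print(sorted(draws))  # [0, 1]

# pylab re-exports numpy.random.randint, whose upper bound is exclusive,
# so numpy.random.randint(0, 1) can only ever return 0 -- hence 100 zeroes.
```

Importing * from pylab after importing randint from random silently rebinds
the name to numpy's version, which is why the order of imports changed the
observed behavior.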
On Thu, 19 Aug 2010 19:10:19 +0200 (CEST)
victor.stinner <python-checkins(a)python.org> wrote:
> Author: victor.stinner
> Date: Thu Aug 19 19:10:18 2010
> New Revision: 84204
> Fix os.get_exec_path() (code and tests) for python -bb
> Catch BytesWarning exceptions.
You should not catch warnings, but silence them using the constructs
provided by the warnings module (warnings.catch_warnings() together with
simplefilter()). If you catch them instead, you'll get buggy behaviour
where e.g. env[b'PATH'] raises BytesWarning because of a unicode key, but
it would otherwise have succeeded.
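A runnable sketch of that approach, with a hand-rolled lookup function
standing in for the env[b'PATH'] access (the function name is invented; the
point is the silencing construct):

```python
import warnings

def lookup():
    # stand-in for the dict lookup that triggers the warning under -b/-bb
    warnings.warn("bytes/str comparison", BytesWarning)
    return "found"

with warnings.catch_warnings():
    # ignore only BytesWarning, and only within this block
    warnings.simplefilter("ignore", BytesWarning)
    result = lookup()  # the warning is silenced; the call still succeeds

print(result)  # found
```

Because the filter is scoped to the `with` block and to one category, other
warnings (and the rest of the program) keep their normal behaviour.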
I've discovered a slightly surprising thing about the way
AST objects for slices are constructed. According to
Python.asdl, all three parts of a slice are optional:
slice = Slice(expr? lower, expr? upper, expr? step)
But that's not quite the way the parser sees things:
Python 3.1.2 (r312:79147, Aug 19 2010, 20:26:20)
[GCC 4.0.1 (Apple Computer, Inc. build 5367)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import ast
>>> t = ast.parse("x[::]", mode='eval')
>>> ast.dump(t)
"Expression(body=Subscript(value=Name(id='x', ctx=Load()),
slice=Slice(lower=None, upper=None, step=Name(id='None', ctx=Load())), ctx=Load()))"
In other words,
    x[::]
is being parsed as though it had been written
    x[::None]
Is there a good reason for an omitted third slice
argument being treated differently from the others?
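A small probe confirms that only the step slot gets special treatment (kept
version-agnostic, since what step contains depends on the interpreter):

```python
import ast

tree = ast.parse("x[::]", mode="eval")
sl = tree.body.slice  # the Slice node of the subscript

print(type(sl).__name__)   # Slice
print(sl.lower, sl.upper)  # None None
# sl.step is interpreter-dependent: the 3.1 parser quoted above synthesizes
# Name(id='None') for an omitted step, while later versions leave it as None.
print(ast.dump(sl))
```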
Could (and should) the online Python 3.1 docs be updated to show Python
2.7 as stable?
All the best,
-------- Original Message --------
Subject: Old link text in documentation
Date: Sun, 15 Aug 2010 15:49:34 -0700
From: Aaron DeVore <aaron.devore(a)gmail.com>
The link text at http://docs.python.org/py3k/ under "Docs for other
versions" still describes 2.7 as being "in development".
2010/8/17 martin.v.loewis <python-checkins(a)python.org>:
> Author: martin.v.loewis
> Date: Wed Aug 18 00:58:42 2010
> New Revision: 84166
> Add Ask Solem.
> Modified: python/branches/py3k/Misc/developers.txt
> --- python/branches/py3k/Misc/developers.txt (original)
> +++ python/branches/py3k/Misc/developers.txt Wed Aug 18 00:58:42 2010
> @@ -20,6 +20,10 @@
> Permissions History
> +- Ask Solem was given commit access on Aug 17 2010 by MvL,
> + on recommendation by Jesse Noller, for work on the subprocess
> + library.
IIRC it was multiprocessing.
Is there any proposal to accommodate installing multiple versions of a
module in parallel?
I have client code in multiple projects using version x.y of a C-compiled
module A.
I want to test a new version x.z of module A, but all client software needs
to be recompiled against the new version. If I just install the module, all
the other client software breaks.
I know I could test using virtualenv, but there would be a lot of modules to
install into virtualenv to run the tests, so this would be cumbersome. I'd
prefer to have multiple version co-exist so I could update projects to the
new version at my convenience.
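One low-tech way to get that kind of coexistence is to install each build
into its own directory and select one on sys.path at startup. A sketch with
a hypothetical pure-Python stand-in for the compiled module (all names
invented for illustration):

```python
import os
import sys
import tempfile

# Simulate two installed versions of a module, each in its own directory.
base = tempfile.mkdtemp()
for version in ("x.y", "x.z"):
    vdir = os.path.join(base, "a_mod-" + version)
    os.makedirs(vdir)
    with open(os.path.join(vdir, "a_mod.py"), "w") as f:
        f.write("VERSION = %r\n" % version)

# A client picks the version it was built against by prepending that
# version's directory to sys.path before the first import.
sys.path.insert(0, os.path.join(base, "a_mod-x.z"))
import a_mod

print(a_mod.VERSION)  # x.z
```

Each project can then be migrated to x.z at its own pace, simply by changing
which directory it prepends.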
How does this situation happen? I have lots of C++ code using pyublas,
which allows C++ code written to the boost::ublas interface to operate on
numpy vectors/matrices. pyublas is built against the boost libs, and it
installs a module whose purpose is to register conversions.
When I update the boost libs, I have to rebuild pyublas and install the
updated module, then rebuild my client software modules. If pyublas is
built against a different boost version than my client modules, the conversions