I have filed a bug report "Building Python 2.4.3 on Solaris 9/10 with Sun
Studio 11" 1496561 in the Python tracker. The problem I have encountered is
that some of the unit tests of the application Roundup fail, with Python
producing a segmentation fault and dumping core, if Python was built with Sun
Studio 11 (Sun C 5.8). In fact, not only do some of the unit tests fail; the
application Roundup itself also fails at certain steps.
If gcc is used, everything works fine. As Richard Jones suggests, it might be
a problem in the anydbm module. I would prefer to use the native
compiler of a platform. To name only two reasons: distributing the application
is easier (dynamic library dependencies are most likely met on the target
system), and Sun maintains the reference native libraries.
Help would be appreciated, thanks
Looking at #1153226, I found this:
We introduced emitting a DeprecationWarning for PyArg_ParseTuple
integer arguments if a float is given. This doesn't affect functions
like file.seek, which use PyInt_AsLong to parse their argument:
PyInt_AsLong calls the nb_int slot, which silently converts floats.
Is that acceptable? Should PyInt_AsLong not accept other numbers,
or should the functions be changed?
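For readers on current Python, the contrast under discussion can be sketched in modern terms (a rough analogy only, not the 2.5-era C API itself): int() goes through the nb_int slot and silently truncates a float, while operator.index() refuses non-integers outright.

```python
import operator

# Rough modern analogy of the two behaviors (not the 2.5-era C API itself):
# int() uses the nb_int slot and silently truncates a float, the way
# PyInt_AsLong does; operator.index() rejects floats outright.
print(int(3.7))  # 3: silent truncation

try:
    operator.index(3.7)
except TypeError as exc:
    print("rejected:", exc)
```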
I'm seeing a dubious failure of test_gzip and test_tarfile on my AMD64
machine. It's triggered by the recent struct changes, but I'd say it's
probably caused by a bug/misfeature in zlibmodule: zlib.crc32 is the result
of a zlib 'crc32' function call, which returns an unsigned long.
zlib.crc32 turns that unsigned long into a (signed) Python int, which means
a number beyond 1<<31 goes negative on 32-bit systems and other systems
with 32-bit longs, but stays positive on systems with 64-bit longs:
The old structmodule coped with that:
>>> struct.pack("<l", -271938108)
>>> struct.pack("<l", 4023029188)
The new one does not:
>>> struct.pack("<l", -271938108)
>>> struct.pack("<l", 4023029188)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "Lib/struct.py", line 63, in pack
struct.error: 'l' format requires -2147483647 <= number <= 2147483647
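The two literals above are the same 32-bit pattern read two different ways, which is why the old module "coped". A quick check on current Python (where "<l" range-checks like the new module did):

```python
import struct

# -271938108 and 4023029188 are the same 32 bits, read signed vs unsigned.
assert (-271938108) & 0xffffffff == 4023029188

# Pack the signed reading, then reinterpret the same bytes as unsigned.
packed = struct.pack("<l", -271938108)
print(struct.unpack("<L", packed)[0])  # 4023029188

# The range check, as in the traceback above:
try:
    struct.pack("<l", 4023029188)
except struct.error as exc:
    print("rejected:", exc)
```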
The structmodule should be fixed (and a test added ;) but I'm also wondering
if zlib shouldn't be fixed. Now, I'm AMD64-centric, so my suggested fix
would be to change the PyInt_FromLong() call to PyLong_FromUnsignedLong(),
making zlib always return positive numbers -- it might break some code on
32-bit platforms, but that code is already broken on 64-bit platforms. But I
guess I'm okay with the long being changed into an actual 32-bit signed
number on 64-bit platforms, too.
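Whichever way the module goes, the usual portable workaround in calling code is to mask the result to 32 bits, which makes both kinds of platform agree (a sketch, using the values quoted above):

```python
import zlib

# Portable workaround: mask the CRC to 32 bits so that platforms with
# 32-bit and 64-bit longs agree on the value.
def crc32_unsigned(data, value=0):
    return zlib.crc32(data, value) & 0xffffffff

# The signed value seen on a 32-bit platform masks to the unsigned one:
print(-271938108 & 0xffffffff)  # 4023029188
print(crc32_unsigned(b"example"))
```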
Thomas Wouters <thomas(a)python.org>
I'm moving this to -dev, as I hope more people there are competent in the
interpreter's internal workings :). So please reply to -dev only.
The question is about the use of generators in embedded v2.4 with asserts
enabled.
Can somebody explain why the code in try2.c works with wrappers 2 and 3
but crashes on the buggy assert for all others, that is, the pure generator
and wrappers 1, 4, 5?
In other words, what does [i for i in gen] do differently from other
ways of iterating over gen that helps it avoid the assert? And
also, how could I write something that is still a generator but does
not trigger the assert?
While the right solution is just to fix Python (which has been done for v2.5),
we need a workaround, for the following reason:
We have added support for returning rows (as tuples, dictionaries,
or classes) and sets of both scalars and rows to PostgreSQL's
PL/Python embedded language, but there are objections to getting this
into the distribution, because there is a bug in Python 2.4 (a wrong
assert) and an additional bug in the Red Hat RPM build process, which
leaves the buggy asserts in python.so.
So I hoped to write a simple wrapper class, but it only seems to work,
when the generator is turned into list, which is not a good solution as
it works directly against what generators are good for.
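The distinction asked about above can be illustrated like this (a hypothetical stand-in generator; the real code lives in try2.c and PL/Python): a list comprehension drains the generator eagerly inside a single Python-level call, whereas handing the generator object around leaves its frame suspended to be resumed later.

```python
# Hypothetical stand-in for the real generator in try2.c / PL/Python.
def gen():
    yield 1
    yield 2

# Eager: the list comprehension drains the generator immediately, so the
# embedding layer only ever sees a finished list.
eager = [i for i in gen()]
print(eager)  # [1, 2]

# Lazy: the generator's frame stays suspended between next() calls; that
# suspended state is what trips the buggy assert in an embedded 2.4.
g = gen()
print(next(g))  # 1
print(next(g))  # 2
```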
Skype Technologies OÜ
Akadeemia tee 21 F, Tallinn, 12618, Estonia
Skype me: callto:hkrosing
Some background for those not watching python-checkins:
I neglected to do "svn add" for the new functools Python module when
converting functional->functools. The buildbots stayed green because the
ImportError triggered by the line "import functools" in test_functools was
treated as a TestSkipped by regrtest.py.
Georg noticed the file missing from the checkin message, but this is the
second time I (and the buildbots) have missed a regression due to this
behaviour. (As I recall, last time I checked in some broken code because I
didn't notice the additional entry appearing in the list of unexpected skips
in my local testing.)
Tim Peters wrote:
> [Nick Coghlan]
>> ... (we should probably do something about that misleading ImportError ->
>> TestSkipped -> green buildbot behaviour. . . )
> I looked at that briefly a few weeks back and gave up. Seemed the
> sanest thing was to entirely stop treating ImportError as "test
> skipped", and rewrite tests that legitimately _may_ get skipped to catch
> expected ImportErrors and change them to TestSkipped themselves.
> A bit of framework might help; e.g., a test that expects to get
> skipped due to failing imports on some platforms could define a
> module-level list bound to a conventional name containing the names of
> the modules whose import failure should be treated as TestSkipped, and
> then regrtest.py could be taught to check import errors against the
> test module's list (if any).
> In the case du jour, test_functools.py presumably wouldn't define that
> list, so that any ImportError it raised would be correctly treated as
> test failure.
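Tim's suggested convention could look something like this (a sketch only; the list name and helper function are invented here, not actual regrtest.py API):

```python
import types

# Hypothetical sketch of the convention: a test module lists the imports
# whose failure legitimately means "skip me"; anything else is a failure.
# ALLOWED_IMPORT_SKIPS and classify_import_error are invented names.
def classify_import_error(test_module, exc):
    allowed = getattr(test_module, "ALLOWED_IMPORT_SKIPS", [])
    if getattr(exc, "name", None) in allowed:
        return "skipped"
    return "failed"

# A curses-dependent test declares its allowed skip...
test_curses = types.SimpleNamespace(ALLOWED_IMPORT_SKIPS=["curses"])
print(classify_import_error(test_curses, ImportError(name="curses")))  # skipped

# ...while test_functools declares nothing, so a missing functools is a
# real failure instead of a green buildbot.
test_functools = types.SimpleNamespace()
print(classify_import_error(test_functools, ImportError(name="functools")))  # failed
```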
What if we appended unexpected skips to the list of bad tests, so that they get
rerun in verbose mode and the return value becomes non-zero?
    print count(len(surprise), "skip"), \
          "unexpected on", plat + ":"
    # Add the next line after the previous two in regrtest.py
(This happens after the count of failed tests has been printed, so we don't
affect that output.)
Nick Coghlan | ncoghlan(a)gmail.com | Brisbane, Australia
On 5/30/06, Steven Bethard <steven.bethard(a)gmail.com> wrote:
> On 5/30/06, Brett Cannon <brett(a)python.org> wrote:
> > So, first step in my mind is settling if we want to add one more depth
> > the stdlib, and if so, how we want to group (not specific groupings,
> > general guidelines).
> I think that having a package level that exactly matches the divisions
> in the Library Reference (http://docs.python.org/lib/lib.html ) would
> be great.
Makes sense to me.
Currently, that would mean packages for:
> 3. Python Runtime Services
> 4. String Services
> 5. Miscellaneous Services
I don't think we necessarily want a misc package. Should stuff that falls
into it just be at the root level? Besides, some things, such as heapq,
bisect, collections, and the User* modules, could go into a data structures
package. I also think that a testing package would make sense. We could also
have a math package.
> 6. Generic Operating System Services
> 7. Optional Operating System Services
This includes socket, which I think belongs more in a
networking-centric package (not including web-specific stuff). Plus I
believe a threading/concurrency package has been proposed before (which
included hiding 'thread' so that people wouldn't use such low-level stuff).
> 8. Unix Specific Services
> 9. The Python Debugger
> 10. The Python Profiler
Can't pdb and the profiler go into a developer package?
> 11. Internet Protocols and Support
Should xmlrpclib be in here, or in something more in line with RPC and
> 12. Internet Data Handling
Should we merge this with a more generic Internet/Web package? Have a
separate email package that includes 'email', smtp, etc?
> 13. Structured Markup Processing Tools
> 14. Multimedia Services
> 15. Cryptographic Services
> 16. Graphical User Interfaces with Tk
> 17. Restricted Execution
=) This section's not really valid anymore (although I will be fixing that
at some point).
> 18. Python Language Services
> 19. Python compiler package
> 20. SGI IRIX Specific Services
> 21. SunOS Specific Services
> 22. MS Windows Specific Services
I've been starting to get some of the buildbots working again. There
was some massive problem on May 25 where a ton of extra files were
left around. I can't remember if I saw something about that at the
NFS sprint or not.
There is a lingering problem that I can't fix on all the boxes. Namely,
the warning about Setup.dist being newer than Setup is displayed. Should
we always do that step before we build on the buildbots? I can
understand why we wouldn't want to unconditionally overwrite a user's
modified Setup. However, for the buildbots, it seems safer to always do so.
Any objections? Any ideas for the best way to implement this? A
separate BB step for Unix clients? In the makefile?
Martin, I would have fixed it on your Solaris box, but I don't think I
can get access to the buildbot's account.
Using unicode strings with non-ASCII chars in optparse help output fails
with a UnicodeEncodeError (traceback below). I'm working around this by
subclassing OptionParser. Below is the workaround I use in GNU Solfege.
Should something like this be included in Python 2.5?
(Please CC me any answer.)
@@ -30,7 +30,13 @@
-opt_parser = optparse.OptionParser()
+class MyOptionParser(optparse.OptionParser):
+    def print_help(self, file=None):
+        if file is None:
+            file = sys.stdout
+        file.write(self.format_help().encode(file.encoding, 'replace'))
+opt_parser = MyOptionParser()
 opt_parser.add_option('-v', '--version', action='store_true', dest='version')
 opt_parser.add_option('-w', '--warranty', action='store_true', dest='warranty',
                       help=_('Show warranty and copyright.'))
Traceback (most recent call last):
File "./solfege.py", line 43, in ?
File "/home/tom/src/solfege-mcnv/src/mainwin.py", line 70, in ?
options, args = opt_parser.parse_args()
File "/usr/lib/python2.3/optparse.py", line 1129, in parse_args
stop = self._process_args(largs, rargs, values)
File "/usr/lib/python2.3/optparse.py", line 1169, in _process_args
File "/usr/lib/python2.3/optparse.py", line 1244, in _process_long_opt
option.process(opt, value, values, self)
File "/usr/lib/python2.3/optparse.py", line 611, in process
File "/usr/lib/python2.3/optparse.py", line 632, in take_action
File "/usr/lib/python2.3/optparse.py", line 1370, in print_help
UnicodeEncodeError: 'ascii' codec can't encode characters in position 200-202: ordinal not in range(128)
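The core of the workaround is encoding the help text with the output stream's encoding and a 'replace' error handler, instead of letting the ascii codec raise. A minimal standalone sketch (the help string here is made up):

```python
# Minimal sketch of the fix's core idea: encode help text to the output
# stream's encoding, replacing any characters it cannot represent instead
# of raising UnicodeEncodeError. The help string is a made-up example.
help_text = u"Vis garanti og opphavsrett \u00e9\u00e8\u00ea"
encoded = help_text.encode("ascii", "replace")
print(encoded.decode("ascii"))  # unencodable chars become '?'
```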
Tom Cato Amundsen <tca(a)gnu.org> http://www.solfege.org/
GNU Solfege - free ear training http://www.gnu.org/software/solfege/
I'm working on implementing a socket module for IronPython that aims to
be compatible with the standard CPython module documented at
http://docs.python.org/lib/module-socket.html. I have a few questions
about some corner cases that I've found. CPython results below are from
Python 2.4.3 (#69, Mar 29 2006, 17:35:34) [MSC v.1310 32 bit (Intel)] on
Without further ado, the questions:
* getfqdn(): The module docs specify that if no FQDN can be found,
socket.getfqdn() should return the hostname as returned by
gethostname(). However, CPython seems to return the passed-in hostname
rather than the local machine's hostname (as would be expected from
gethostname()). What's the correct behavior?
>>> s.getfqdn(' asdlfk asdfsadf ')
# expected 'mybox.mydomain.com'
* getfqdn(): The function seems to not always return the FQDN. For
example, if I run the following code from 'mybox.mydomain.com', I get
strange output. Does getfqdn() remove the common domain between my
hostname and the one that I'm looking up?
# expected 'otherbox.mydomain.com'
* gethostbyname_ex(): The CPython implementation doesn't seem to
respect the '' == INADDR_ANY and '<broadcast>' == INADDR_BROADCAST
forms. '' is treated as localhost, and '<broadcast>' raises a "host not
found" error. Is this intentional? A quick check seems to reveal that
gethostbyname() is the only function that respects '' and '<broadcast>'.
Are the docs or the implementation wrong?
* getprotobyname(): Only a few protocols seem to be supported. Why?
>>> for p in [a[8:] for a in dir(socket) if a.startswith('IPPROTO_')]:
...     try:
...         print p,
...         print socket.getprotobyname(p)
...     except socket.error:
...         print "(not handled)"
AH (not handled)
DSTOPTS (not handled)
ESP (not handled)
FRAGMENT (not handled)
HOPOPTS (not handled)
ICMPV6 (not handled)
IDP (not handled)
IGMP (not handled)
IPV4 (not handled)
IPV6 (not handled)
MAX (not handled)
ND (not handled)
NONE (not handled)
RAW (not handled)
ROUTING (not handled)
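For comparison, here is a portable probe of the same corner that can be run on any machine. Note that getprotobyname() consults the local protocols database (e.g. /etc/protocols on Unix), so which names resolve is platform-dependent:

```python
import socket

# Probe every IPPROTO_* name against getprotobyname(). Results depend on
# the local protocols database, so individual outcomes vary by platform.
def probe_protocols():
    results = {}
    for name in sorted(a[len("IPPROTO_"):] for a in dir(socket)
                       if a.startswith("IPPROTO_")):
        try:
            results[name] = socket.getprotobyname(name)
        except OSError:  # socket.error is an alias of OSError nowadays
            results[name] = None
    return results

for name, number in probe_protocols().items():
    print(name, number if number is not None else "(not handled)")
```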
Thanks for your help!
It seems that we should convert the crc32 functions in binascii,
zlib, etc. to deal with unsigned integers. Currently it seems that 32-
bit and 64-bit platforms are going to have different results for
Should we do the same as the struct module, and emit a DeprecationWarning
when the input value is < 0? Do we have a PyArg_ParseTuple format
code or a converter that would be suitable for this purpose?
None of the unit tests seem to exercise values where 32-bit and 64-
bit platforms would have differing results, but that's easy enough to
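One such value (a hedged sketch of the kind of test case meant): the standard CRC-32 check vector for b"123456789" is 0xCBF43926, whose high bit is set, so a signed 32-bit result goes negative while an unsigned one stays positive.

```python
import binascii
import zlib

# The standard CRC-32 check value for b"123456789" is 0xCBF43926; its high
# bit is set, so this input distinguishes signed 32-bit results from
# unsigned ones.
data = b"123456789"
unsigned = zlib.crc32(data) & 0xffffffff
signed = unsigned - 0x100000000 if unsigned & 0x80000000 else unsigned
print(hex(unsigned))  # 0xcbf43926
print(signed)         # negative: what a 32-bit-long platform used to return
assert binascii.crc32(data) & 0xffffffff == unsigned
```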