Hello,

Currently the only cheap way for developers to test on SPARC that I'm aware of is using this old Debian qemu image: http://people.debian.org/~aurel32/qemu/sparc/

That image still uses linuxthreads and may contain any number of platform bugs. It is currently impossible to run the test suite without bus errors, see: http://bugs.python.org/issue15589

Could someone with access to a SPARC machine (perhaps with a modern version of Debian-sparc) grab a clone from http://hg.python.org/cpython/ and run the test suite?

Also, it would be really nice if someone could donate a SPARC buildbot.

Stefan Krah
Could someone with access to a SPARC machine (perhaps with a modern version of Debian-sparc) grab a clone from http://hg.python.org/cpython/ and run the test suite?
I'd invoke the "scratch your own itch" principle here. SPARC, these days, is a minority platform; I wouldn't mind deleting all SPARC support from Python in some upcoming release. In no way do I feel obliged to make an effort to ensure that Python 3.3 works on SPARC (and remember that it was me who donated the first buildbot slave, and that was a SPARC machine - which I now had to take down, ten years later).

Of course, when somebody has access to SPARC hardware, *and* they have some interest in Python 3.3 working on it, they should test it. But testing it as a favor to the community is IMO irrelevant now; that particular community is shrinking rapidly. What I personally never really cared about is SparcLinux; if SPARC, then it ought to be Solaris.

IOW: if it breaks, no big deal. Someone may or may not contribute a patch.

Regards,
Martin
Le 08/08/2012 15:25, "Martin v. Löwis" a écrit :
Of course, when somebody has access to SPARC hardware, *and* they have some interest that Python 3.3 works on it, they should test it. But testing it as a favor to the community is IMO irrelevant now; that particular community is shrinking rapidly.
What I personally really never cared about is SparcLinux; if sparc, then it ought to be Solaris.
What Martin said; SPARC under Linux is probably a hobbyist platform. Enterprise users of Solaris SPARC systems can still volunteer to provide and maintain a buildslave. Regards Antoine.
On 8 August 2012 18:56, Antoine Pitrou
Le 08/08/2012 15:25, "Martin v. Löwis" a écrit :
Of course, when somebody has access to SPARC hardware, *and* they have some interest that Python 3.3 works on it, they should test it. But testing it as a favor to the community is IMO irrelevant now; that particular community is shrinking rapidly.
What I personally really never cared about is SparcLinux; if sparc, then it ought to be Solaris.
What Martin said; SPARC under Linux is probably a hobbyist platform. Enterprise users of Solaris SPARC systems can still volunteer to provide and maintain a buildslave.
Is http://wiki.python.org/moin/BuildBot the relevant documentation? It still seems to refer to Subversion; I presume that is no longer needed and just Mercurial will do?

I've set up a blank Solaris 10 zone on a SPARC T1000 with the OpenCSW toolchain (gcc 4.6.3) on our server and installed buildslave. According to the instructions this is the point where I ask for a slave name and password.

Also, would it make sense to support OpenCSW more out of the box? Currently we carry some patches for setup.py in order to pick up e.g. sqlite from /opt/csw etc. Would there be interest in supporting this?

Regards,
Floris
Le 09/08/2012 01:26, Floris Bruynooghe a écrit :
What Martin said; SPARC under Linux is probably a hobbyist platform. Enterprise users of Solaris SPARC systems can still volunteer to provide and maintain a buildslave.
Is http://wiki.python.org/moin/BuildBot the relevant documentation?
Yes, it is, but parts of it may be out of date. Please amend the instructions where necessary :-)
It still seems to refer to subversion, I presume that is no longer needed and just mercurial will do?
True.
I've set up a blank solaris 10 zone on a sparc T1000 with the OpenCSW toolchain (gcc 4.6.3) on our server and installed buildslave. According to the instructions this is the point where I ask for a slave name and password.
Ok, I'll send you one in a couple of days (away from Paris right now).
Also, would it make sense to support OpenCSW more out of the box? Currently we carry some patches for setup.py in order to pick up e.g. sqlite from /opt/csw etc. Would there be an interest in supporting this?
I don't know, what is OpenCSW? I think the answer also depends on the complexity of said patches. Regards Antoine.
Am 09.08.12 01:26, schrieb Floris Bruynooghe:
According to the instructions this is the point where I ask for a slave name and password.
Sent in a private message.
Also, would it make sense to support OpenCSW more out of the box? Currently we carry some patches for setup.py in order to pick up e.g. sqlite from /opt/csw etc. Would there be an interest in supporting this?
If all that needs to be done is to add /opt/csw to search lists where a search list already exists, I see no problem doing so - except that this could be considered a new feature, so it might only be possible to do it for 3.4. If the patches are more involved, we would have to consider them on a case-by-case basis.

Regards,
Martin
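[For what it's worth, a patch of that shape can be tiny. Below is a minimal sketch of the idea; the helper name and directory lists are illustrative, modeled on the style of CPython's setup.py rather than copied from it:]

```python
import os

def add_dir_to_list(dirlist, directory):
    """Prepend 'directory' to 'dirlist' if it exists on disk and is
    not already listed -- the same shape of helper that setup.py-style
    build scripts use for extending compiler search paths."""
    if directory is not None and os.path.isdir(directory) and directory not in dirlist:
        dirlist.insert(0, directory)

# Hypothetical default search lists, as a setup.py might compute them:
lib_dirs = ['/usr/local/lib', '/usr/lib']
inc_dirs = ['/usr/local/include', '/usr/include']

# An OpenCSW patch would then just extend the existing lists; the calls
# are no-ops on machines where /opt/csw does not exist:
add_dir_to_list(lib_dirs, '/opt/csw/lib')
add_dir_to_list(inc_dirs, '/opt/csw/include')
```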
On 9 August 2012 08:22, "Martin v. Löwis"
Am 09.08.12 01:26, schrieb Floris Bruynooghe:
According to the instructions this is the point where I ask for a slave name and password.
Sent in a private message.
Thanks, it seems to be working fine. I triggered a build for 2.7 and 3.x. I'm assuming other builds will just be triggered automatically when needed from now on?
Also, would it make sense to support OpenCSW more out of the box? Currently we carry some patches for setup.py in order to pick up e.g. sqlite from /opt/csw etc. Would there be an interest in supporting this?
If all that needs to be done is to add /opt/csw into search lists where a search list already exists, I see no problem doing so - except that this could be considered a new feature, so it might be only possible to do it for 3.4.
It is for 2.x; setup.py seems to have changed substantially in 3.x and I haven't built that yet with OpenCSW, but I presume I just need to find the right place there too. I'll open an issue for it instead of discussing it here.

Regards,
Floris
On 9 August 2012 08:11, Antoine Pitrou
Le 09/08/2012 01:26, Floris Bruynooghe a écrit :
Also, would it make sense to support OpenCSW more out of the box? Currently we carry some patches for setup.py in order to pick up e.g. sqlite from /opt/csw etc. Would there be an interest in supporting this?
I don't know, what is OpenCSW? I think the answer also depends on the complexity of said patches.
OpenCSW is a community effort (CSW == Community SoftWare) to build a repository of GNU/Linux userland binaries for Solaris. It makes package management as simple as on GNU/Linux, e.g.:

    pkgutil --install wget gcc4core libsqlite3_dev

which would otherwise be a very long and laborious exercise. It is very well maintained and I consider it a must for any Solaris box which isn't tightly locked down.

As I said in my other mail, the patches are rather trivial, but I will open an issue to discuss them there.

Regards,
Floris
Hi,
On 8 August 2012 11:30, Stefan Krah
Could someone with access to a SPARC machine (perhaps with a modern version of Debian-sparc) grab a clone from http://hg.python.org/cpython/ and run the test suite?
One more thing that might be interesting: the OpenCSW project provides access to their build farm to upstream maintainers. They say various/all versions of Solaris are available and compilers etc. are already set up, but I have never tried this out. In case someone is interested, see http://www.opencsw.org/extend-it/signup/to-upstream-maintainers/

Regards,
Floris
Sent in a private message.
Thanks, it seems to be working fine.
Actually, there appears to be a glitch in the network setup: connections to localhost do not seem to be possible in your zone. The tests fail with an assertion:

    self.assertEqual(cm.exception.errno, errno.ECONNREFUSED)
    AssertionError: 128 != 146

where 128 is ENETUNREACH. It would be good if localhost were reachable on a build slave.

Also, if you haven't done so, please make sure that the build slave restarts when the zone or the machine is restarted. Don't worry that restarting will abort builds in progress - that happens from time to time on any slave.

Regards,
Martin
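[A note for anyone decoding such failures: the raw numbers can be translated to symbolic names with the stdlib errno module. A small sketch; note that errno values differ between platforms (ENETUNREACH is 128 on Solaris but 101 on Linux), so the lookup must be done on the machine where the test actually failed:]

```python
import errno

def errno_name(num):
    """Map a raw errno number to its symbolic name on the *running*
    platform, falling back to a placeholder for unknown numbers."""
    return errno.errorcode.get(num, 'unknown errno %d' % num)

# On the Solaris slave these would decode to the numbers seen in the
# traceback (128 and 146); the symbolic names are portable even though
# the numbers are not.
print(errno_name(errno.ENETUNREACH))   # prints "ENETUNREACH"
print(errno_name(errno.ECONNREFUSED))  # prints "ECONNREFUSED"
```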
Floris Bruynooghe
One more thing that might be interesting, the OpenCSW project provides access to their build farm to upstream maintainers. They say various/all versions of solaris are available and compilers etc are already setup, but I have never tried this out. In case someone is interested in this, see http://www.opencsw.org/extend-it/signup/to-upstream-maintainers/
Thanks for the link. Perhaps I'll try to get an account there. Stefan Krah
On 10 August 2012 06:48, "Martin v. Löwis"
Actually, there appears to be a glitch in the network setup: it appears that connections to localhost are not possible in your zone. The tests fail with an assertion
    self.assertEqual(cm.exception.errno, errno.ECONNREFUSED)
    AssertionError: 128 != 146
where 128 is ENETUNREACH. It would be good if localhost was reachable on a build slave.
The localhost network seems fine, which is shown by the test_socket test just before. I think the issue here is that socket.create_connection() iterates over the result of socket.getaddrinfo('localhost', port, 0, SOCK_STREAM), which returns

    [(2, 2, 0, '', ('127.0.0.1', 0)), (26, 2, 0, '', ('::1', 0, 0, 0))]

on this host. The first result is tried and returns ECONNREFUSED, but then the second address is tried and this returns ENETUNREACH because this host has no IPv6 network configured. And create_connection() raises the last exception it received.

If getaddrinfo() is called with the AI_ADDRCONFIG flag then it will only return the IPv4 version of localhost. I've created an issue to track this: http://bugs.python.org/issue15617
Also, if you haven't done so, please make sure that the build slave restarts when the zone or the machine is restarted. Don't worry that restarting will abort builds in progress - that happens from time to time on any slave.
I'll check this, thanks for the reminder. Regards, Floris
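[The address loop Floris describes can be sketched in a few lines. This is a simplified illustration of the behaviour, not the stdlib's actual create_connection() implementation: each getaddrinfo() result is tried in turn, and only the *last* exception is re-raised, which is why ENETUNREACH from the ::1 attempt masks the earlier ECONNREFUSED from 127.0.0.1:]

```python
import socket

def create_connection_sketch(host, port, timeout=10):
    """Try every address getaddrinfo() returns for (host, port) and
    re-raise only the last failure -- the behaviour at the heart of
    http://bugs.python.org/issue15617."""
    last_err = None
    for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
            host, port, 0, socket.SOCK_STREAM):
        sock = socket.socket(family, socktype, proto)
        try:
            sock.settimeout(timeout)
            sock.connect(sockaddr)
            return sock
        except OSError as exc:
            last_err = exc  # an earlier, more accurate error is discarded
            sock.close()
    if last_err is None:
        raise OSError('getaddrinfo returned no addresses')
    raise last_err
```

Passing socket.AI_ADDRCONFIG as the flags argument to getaddrinfo() filters the result down to address families actually configured on the host, so a machine without IPv6 would only be offered 127.0.0.1 here.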
participants (4)

- "Martin v. Löwis"
- Antoine Pitrou
- Floris Bruynooghe
- Stefan Krah