2017-04-03 16:00 GMT+02:00 David Edelsohn email@example.com:
I have fixed the Git problem. I thought that it had been working.
Thank you :-) I see tests running: http://buildbot.python.org/all/builders/PPC64%20AIX%203.x/builds/567/steps/t...
The testsuite failures on AIX are issues with the AIX kernel and C library, often corner cases. I don't want to get into arguments about the POSIX standard. Some of them are genuine conformance bugs and some are different interpretations of the standard.
Many tests in the CPython test suite actually exercise the OS as well, not only Python. And in many cases, it's hard to "mock" the OS (with unittest.mock), since the tested functions are thin wrappers around syscalls.
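To illustrate the point: a minimal sketch (not taken from the actual test suite) of what mocking a syscall wrapper looks like. Once os.stat is patched out, the test only exercises the mock, so the kernel/libc behaviour that differs on AIX is exactly the part no longer being tested.

```python
import os
import unittest
from unittest import mock

class TestStatMocked(unittest.TestCase):
    def test_stat_size(self):
        # Replace os.stat with a fake; the real syscall never runs,
        # so platform-specific kernel/libc quirks can't be observed.
        fake_result = mock.Mock(st_size=123)
        with mock.patch("os.stat", return_value=fake_result):
            self.assertEqual(os.stat("/any/path").st_size, 123)

if __name__ == "__main__":
    unittest.main()
```

This passes on every platform precisely because nothing OS-specific is left to fail, which is why mocking is of limited help here.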
Addressing these problems in AIX is a slow process. If the failing test cases are too annoying, I would recommend skipping them.
So yeah, let's skip such tests on AIX!
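For what it's worth, skipping is straightforward with the stdlib's unittest machinery. A small sketch (the test name and skip reason are made up for illustration):

```python
import sys
import unittest

class TestCornerCases(unittest.TestCase):
    # Hypothetical example: skip a known AIX kernel/libc corner case
    # instead of letting it turn the whole buildbot red.
    @unittest.skipIf(sys.platform.startswith("aix"),
                     "known AIX kernel/libc corner case")
    def test_corner_case(self):
        self.assertTrue(True)

if __name__ == "__main__":
    unittest.main()
```

On AIX the test is reported as skipped rather than failed; everywhere else it runs normally.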
Despite the testsuite failures, Python builds and runs on AIX for the vast majority of users and applications. I don't see the benefit in dropping support for a platform that functions because it doesn't fully pass the testsuite.
It's a limitation of buildbot: the output is basically binary, pass (green) or fail (red). If buildbot understood test results (by parsing stdout, or if Python produced an output file that is easier to parse), it would be possible to detect *regressions* compared to previous runs. With Jenkins and JUnit, I know this is possible. But I'm not volunteering to maintain buildbot or to switch to Jenkins; I can only help to fix buildbot issues ;-)
My point is that with the current tools, it's hard to distinguish regressions from "known failures": buildbots that have been failing for many months.
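The regression detection described above could be as simple as a set difference against a recorded baseline. A minimal sketch, assuming hypothetical test names and that the failing-test list for a run can be extracted somehow:

```python
# Tests that have been failing on this platform for months (assumed names).
KNOWN_FAILURES = {"test_posix_corner_case", "test_time_rounding"}

def regressions(current_failures, known=KNOWN_FAILURES):
    """Return only the *new* failures, i.e. those not on the known list."""
    return sorted(set(current_failures) - known)

print(regressions({"test_posix_corner_case", "test_new_bug"}))
# → ['test_new_bug']
```

A buildbot step using something like this could stay green on long-standing AIX failures and go red only when a new test starts failing.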