On 5/31/06, Tim Peters <tim.peters@gmail.com> wrote:
> [Brett Cannon]
> > Where would this list be stored?  If the import fails the module is not
> > imported, so how do you get access to the list?
>
> Oh, details ;-)
>
> > It could be some function that *must* be called before any imports
> > that registers with test.test_support what modules it may not be
> > able to import and then check that way.
>
> That would work.  Or the list could be stored in a stylized comment,
> which regrtest.py sucks up via searching the source when it catches an
> import error.

That could work as well, although I am personally not a fan of requiring special formatting in comments.
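
For concreteness, the stylized comment might look something like this
(the marker text and helper name here are made up):

    import re

    # A test module would declare, in a comment regrtest.py knows to
    # look for, which of its imports are allowed to fail, e.g.
    #
    #     # expected-import-failures: bsddb, zlib
    #
    def expected_import_failures(test_source):
        """Return the module names a test's source declares as
        allowed to fail on import."""
        match = re.search(r"#\s*expected-import-failures:\s*(.+)",
                          test_source)
        if match is None:
            return []
        return [name.strip() for name in match.group(1).split(",")]

regrtest.py could call that on the test's source whenever an ImportError
escapes, and skip rather than fail if the missing module is listed.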

Another option is a custom, test-only import function that returns the module object and raises TestSkipped if the import fails.  That would let us avoid catching ImportError in the tests themselves and be very explicit about which imports are allowed to fail.
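
A minimal sketch of such a helper, assuming it lives in
test.test_support (the name import_module is made up):

    from test.test_support import TestSkipped

    def import_module(name):
        """Import and return the named module, turning ImportError
        into a skip instead of a test failure."""
        try:
            module = __import__(name)
        except ImportError:
            raise TestSkipped("required module %r is not available"
                              % name)
        # __import__ returns the top-level package for dotted names,
        # so walk down to the submodule that was actually requested.
        for component in name.split(".")[1:]:
            module = getattr(module, component)
        return module

A test would then open with something like bsddb = import_module("bsddb")
and never touch ImportError itself.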

> > Otherwise the list would just have to be external to all tests
> > (which might not be so bad: a master list of modules we think can
> > be skipped on various platforms, plus which modules can fail on
> > import for which tests).
>
> That can work too.  The only thing that can't work is the current scheme.

Nope, it can't.
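
For what it's worth, the external master list could start out as nothing
more than a dict in regrtest.py (the entries below are only illustrative):

    # Which modules may fail to import for which tests without that
    # counting as a test failure.
    EXPECTED_IMPORT_FAILURES = {
        "test_bsddb3": ["bsddb"],
        "test_zipimport": ["zlib"],
    }

    def import_failure_means_skip(test_name, module_name):
        """True if regrtest should treat the failed import as a skip."""
        return module_name in EXPECTED_IMPORT_FAILURES.get(test_name, [])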

> A hole is that whether an import failure should mean "test skipped"
> also depends on the platform, like that test_startfile should suffer
> an import failure unless you're on Windows (in which case it should
> not).  It can also depend on how the platform is configured (e.g.,
> test_bsddb3 or test_zipimport depend on whether you've installed
> external libraries).  But I'm happy to take improvement over
> perfection.
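
Plugging that hole probably means making the expected failures
conditional on the platform.  Building on the dict sketched above,
roughly (hypothetical, and surely incomplete):

    import sys

    def import_failure_means_skip(test_name, module_name):
        """True if the failed import should skip the test here."""
        # test_startfile's import should fail everywhere except
        # Windows; on Windows the same failure is a genuine error.
        if test_name == "test_startfile":
            return sys.platform != "win32"
        return module_name in EXPECTED_IMPORT_FAILURES.get(test_name, [])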


OK, this should get kicked over to python-dev [which I am bcc:ing on this].  It looks like the testing framework needs a way to mark tests that are implementation-specific (so that non-CPython interpreters can skip them), a better way to mark platform-specific tests, and a way to specify which import failures mean a test simply should not run, as opposed to signaling an actual error.
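
For the import piece, the registration function I suggested above might
be as simple as the following; none of these names exist in
test.test_support today, this is just one shape it could take:

    # Hypothetical additions to test.test_support.
    _allowed_import_failures = set()

    def register_import_failures(*module_names):
        """Called by a test before its imports to record the modules
        whose absence should mean 'skip' rather than 'error'."""
        _allowed_import_failures.update(module_names)

    def failure_is_expected(module_name):
        """Queried by regrtest.py when an ImportError escapes a test."""
        return module_name in _allowed_import_failures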