[py-dev] Tricky parametrization problem

Tomi Pieviläinen tomi.pievilainen at iki.fi
Mon Dec 17 11:24:41 CET 2012


I have a simulation that has different kinds of forces to simulate,
each implemented as a separate function. I have implemented those
functions in several modules: baseline Python (NumPy), C (accessed via
ctypes) and PyCUDA. I need to make sure that the optimized versions
give the same results as the slow but correct Python version. Later I
will be adding modules/implementations for Cython, Numba etc. too. Not
all modules are always available; for example CUDA depends on the
current system's hardware, so I need to skip some tests.
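
For a single implementation the skipping itself seems easy enough with
pytest.importorskip(); a minimal sketch, treating "pycuda" as a
stand-in for whichever backend module is under test:

import pytest

# Skip every test in this file when the import fails, e.g. on a
# machine without CUDA hardware.
pycuda = pytest.importorskip("pycuda")

The tricky part is combining that skip with parametrization over both
the modules and the functions.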

For each optimized module I want to take func1, func2 and func3 and
compare them to the baseline func1, func2 and func3. So I was hoping I
could write test functions like

@pytest.mark.parametrize(('basefunc', 'fastfunc'),
                         [(func1, mod.func1),
                          (func2, mod.func2),
                          (func3, mod.func3)])
@pytest.mark.parametrize(('x', 'y'),
                         [(numpy.random.randn(8), numpy.random.randn(8)),
                          (numpy.random.randn(16), numpy.random.randn(16)),
                          ... ])
def test_random_arrays(x, y, basefunc, fastfunc):
    assert_arrays_almost_equal(basefunc(x, y), fastfunc(x, y))


and have mod parametrized as well. But I can't use
@pytest.mark.parametrize for that, since the modules can't always be
imported.
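
One workaround I can see is building the parameter list at collection
time and simply leaving out the backends that fail to import; a sketch
with hypothetical module names ("baseline", "cmod", "cudamod"):

import importlib

import pytest

import baseline  # hypothetical: the slow-but-correct NumPy module

def available_pairs():
    # Collect (basefunc, fastfunc) pairs for every backend that
    # imports cleanly on this system; the rest are silently dropped.
    pairs = []
    for name in ("cmod", "cudamod"):  # hypothetical backend modules
        try:
            mod = importlib.import_module(name)
        except ImportError:
            continue
        for fname in ("func1", "func2", "func3"):
            pairs.append((getattr(baseline, fname), getattr(mod, fname)))
    return pairs

@pytest.mark.parametrize(('basefunc', 'fastfunc'), available_pairs())
def test_random_arrays(basefunc, fastfunc):
    ...

But that silently drops the missing backends instead of reporting them
as skipped, which is why I keep coming back to fixtures.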

So I was then hoping to have a fixture for the module or for fastfunc
that would call pytest.skip() if the import fails, but that would
require the fixture to return a bunch of parameters for a single call,
which I've understood is not possible.
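
The closest thing I can imagine is a parametrized fixture that returns
the whole module and skips itself when the import fails, with the
function then picked by name inside the test. A rough sketch, again
with hypothetical module names:

import numpy
import pytest

import baseline  # hypothetical: the reference implementation

@pytest.fixture(params=["cmod", "cudamod"])  # hypothetical backends
def fastmod(request):
    # importorskip() raises a skip for every test using this fixture
    # instance when the backend is not importable on this system.
    return pytest.importorskip(request.param)

@pytest.mark.parametrize("funcname", ["func1", "func2", "func3"])
def test_random_arrays(fastmod, funcname):
    x, y = numpy.random.randn(8), numpy.random.randn(8)
    assert_arrays_almost_equal(getattr(baseline, funcname)(x, y),
                               getattr(fastmod, funcname)(x, y))

I'm not sure this is idiomatic, though, and it punts on parametrizing
x and y over several array sizes at the same time.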

I'm not really sure if what I'm aiming at makes any sense, but hopefully
someone has an idea on how to do this in a clean way.


-- 
Tomi Pieviläinen, +358 400 487 504
A: Because it disrupts the natural way of thinking.
Q: Why is top posting frowned upon?


