Where should I put tests when packaging python modules?
Hi,

Where should I put tests when packaging python modules? I want a "cowpath", an "obvious way".

Dear experts, please decide:

inside the module, like this answer:
http://stackoverflow.com/questions/5341006/where-should-i-put-tests-when-pac...

XOR

outside the module, like this:
https://github.com/pypa/sampleproject/tree/master/tests

I think there is no need to hurry. Let's wait one week, and then check which one is preferred.

Regards,
Thomas Güttler
-- http://www.thomas-guettler.de/
On Oct 6, 2015, at 12:07 AM, Thomas Güttler wrote:

Hi,
Where should I put tests when packaging python modules?
I want a "cowpath", an "obvious way"
Dear experts, please decide:
inside the module like this answer:
http://stackoverflow.com/questions/5341006/where-should-i-put-tests-when-pac...
XOR
outside the module like this:
https://github.com/pypa/sampleproject/tree/master/tests
I think there is no need to hurry. Let's wait one week, and then check which one is preferred.
Regards, Thomas Güttler
Inside the package.

If you put your tests outside your package, then you can't install the tests for two packages simultaneously, because everyone's tests are just in the top-level package "tests". This tends to infest the whole package, since then tests import things from each other using 'from tests import ...'.

This is recommended by the hitchhiker's guide, and seconded by http://as.ynchrono.us/2007/12/filesystem-structure-of-python-project_21.html.

-glyph
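Glyph's layout can be sketched concretely. The following is an illustrative example (the package name `mypkg` is made up): it builds the "tests inside the package" structure in a temporary directory, then shows that the installed tests are importable and discoverable with the stdlib unittest loader alone.

```python
import os
import sys
import tempfile
import textwrap
import unittest

# Illustrative "tests inside the package" layout (mypkg is a made-up name):
#
#   mypkg/
#       __init__.py
#       tests/
#           __init__.py
#           test_smoke.py
#
# Built in a temp dir so this sketch is self-contained.
root = tempfile.mkdtemp()
tests_dir = os.path.join(root, "mypkg", "tests")
os.makedirs(tests_dir)
open(os.path.join(root, "mypkg", "__init__.py"), "w").close()
open(os.path.join(tests_dir, "__init__.py"), "w").close()
with open(os.path.join(tests_dir, "test_smoke.py"), "w") as f:
    f.write(textwrap.dedent("""\
        import unittest

        class TestSmoke(unittest.TestCase):
            def test_truth(self):
                self.assertTrue(True)
    """))

# Because mypkg.tests is an importable package, anyone with mypkg installed
# can discover and run its tests without the sdist:
sys.path.insert(0, root)
suite = unittest.defaultTestLoader.discover("mypkg.tests", top_level_dir=root)
```

The same discovery works from the command line as `python -m unittest discover mypkg.tests`, which is the practical payoff of the installable-tests layout.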
On Tue, Oct 6, 2015 at 10:20 AM, Glyph Lefkowitz wrote:

If you put your tests outside your package, then you can't install the tests for two packages simultaneously, because everyone's tests are just in the top-level package "tests". This tends to infest the whole package, since then tests import things from each other using 'from tests import ...'. This is recommended by the hitchhiker's guide, and seconded by <http://as.ynchrono.us/2007/12/filesystem-structure-of-python-project_21.html>.
I don't want to be harsh here, but arguments would be way more interesting to discuss, as opposed to giving links to JPC's outdated packaging guide. He's an exceptional developer, but many of the things in that article are out of fashion (he disagrees with using console_scripts entrypoints, doesn't understand what src is for, and so on).

I don't think anyone would ever intentionally put a "tests" package in site-packages, so why would you mention that?

Thanks,
-- Ionel Cristian Mărieș, http://blog.ionelmc.ro
On October 6, 2015 at 3:21:04 AM, Glyph Lefkowitz (glyph@twistedmatrix.com) wrote:
Inside the package.
If you put your tests outside your package, then you can't install the tests for two packages simultaneously, because everyone's tests are just in the top-level package "tests". This tends to infest the whole package, since then tests import things from each other using 'from tests import ...'. This is recommended by the hitchhiker's guide, and seconded by <http://as.ynchrono.us/2007/12/filesystem-structure-of-python-project_21.html>.
I dislike putting tests inside the package.

The supposed benefit is that anyone can run the tests at any time, but I don't find that actually true, because it means (as someone else pointed out) that you either have to depend on all your test dependencies or that there is already an additional step to install them. If you're going to have to locate and install the test dependencies, then you might as well fetch the tarball with tests as well.

Someone suggested setuptools tests_require, but that only functions when you have a setup.py available and you execute ``setup.py test``. It does not help you at all once the package is installed and the sdist is gone.

I also don't think people actually run the tests when they are installed in any significant number; at least I've never once in my life done it, or even had a desire to do it.

Some projects have test suites which are significantly large, too. The PyCA cryptography project, for instance, has to ship its vectors as an additional package to reduce the size of the final build product.

-----------------
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
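For readers unfamiliar with the mechanism being criticized here, the setuptools wiring looks roughly like this. The project name `mypkg` and the test package are illustrative, and this is a sketch of the mechanism only (current setuptools has since deprecated the `test` command); the `--name` trick is used so the snippet only queries metadata instead of running a real command:

```python
import sys
from setuptools import setup

# Query only the project metadata for this demo instead of running a command.
sys.argv = ["setup.py", "--name"]

dist = setup(
    name="mypkg",                 # illustrative project name
    packages=["mypkg"],
    # What `python setup.py test` would run.  tests_require is fetched on
    # demand (via easy_install, as eggs) -- and is unavailable once only the
    # installed package remains and the sdist is gone, which is the point
    # made above.
    test_suite="mypkg.tests",
    tests_require=["pytest"],
)
```

The key limitation: `tests_require` dependencies are only installed when `setup.py test` itself runs, so nothing about them survives into the installed distribution.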
On Tue, Oct 6, 2015 at 10:54 AM, Donald Stufft
On October 6, 2015 at 3:21:04 AM, Glyph Lefkowitz (glyph@twistedmatrix.com) wrote:
Inside the package.
If you put your tests outside your package, then you can't install the tests for two packages simultaneously, because everyone's tests are just in the top-level package "tests". This tends to infest the whole package, since then tests import things from each other using 'from tests import ...'. This is recommended by the hitchhiker's guide, and seconded by .
I dislike putting tests inside the package.
The supposed benefit is that anyone can run the tests at anytime, but I don't find that actually true because it means (as someone else pointed out) that you either have to depend on all your test dependencies or that there is already an additional step to install them. If you're going to have to locate and install the test dependencies, then you might as well fetch the tarball with tests as well.
Someone suggested setuptools tests_require, but that only functions when you have a setup.py available and you execute ``setup.py test``. It does not help you at all once the package is installed and the sdist is gone.
I also don't think people actually run the tests when they are installed in any significant number, at least I've never once in my life done it or even had a desire to do it.
The significant number is not so relevant if you buy the argument that it is useful to downstream packagers: it may be a few users, but those are crucial. I also forgot to mention that the ability to test something without building is crucial when you want to distribute binaries. David
Some projects have test suites which are significantly large too. The PyCA cryptography project, for instance, has to ship its vectors as an additional package to reduce the size of the final build product.
----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
_______________________________________________ Distutils-SIG maillist - Distutils-SIG@python.org https://mail.python.org/mailman/listinfo/distutils-sig
On October 6, 2015 at 6:18:32 AM, David Cournapeau (cournape@gmail.com) wrote:
The significant number is not so relevant if you buy the argument that it is useful to downstream packagers: it may be a few users, but those are crucial.
I also forgot to mention that the ability to test something without building is crucial when you want to distribute binaries.
Is it actually useful to them? None of the Linux downstreams I know of have ever mentioned a preference for it. As far as I know, the only preference they've ever expressed to me is that the tests are included in the sdist.

FreeBSD relies on ``python setup.py test`` as its preferred test invocation, so it apparently doesn't find it useful either.

-----------------
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
On Tue, Oct 6, 2015 at 11:21 AM, Donald Stufft
On October 6, 2015 at 6:18:32 AM, David Cournapeau (cournape@gmail.com) wrote:
The significant number is not so relevant if you buy the argument that it is useful to downstream packagers: it may be a few users, but those are crucial.
I also forgot to mention that the ability to test something without building is crucial when you want to distribute binaries.
Is it actually useful to them? None of the Linux downstreams I know of have ever mentioned a preference for it. As far as I know, the only preference they've ever expressed to me is that the tests are included in the sdist.
It is at least useful to me, and I am packaging quite a few binaries.
FreeBSD relies on ``python setup.py test`` as its preferred test invocation, so it apparently doesn't find it useful either.
I would like to hear their rationale before guessing. It is hard for me to imagine they would not rather test the binaries than the sources. Something as simple as making sure you have not forgotten runtime dependencies becomes much easier this way.

David
----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
On October 6, 2015 at 6:33:10 AM, David Cournapeau (cournape@gmail.com) wrote:
On Tue, Oct 6, 2015 at 11:21 AM, Donald Stufft wrote:
On October 6, 2015 at 6:18:32 AM, David Cournapeau (cournape@gmail.com) wrote:
The significant number is not so relevant if you buy the argument that it is useful to downstream packagers: it may be a few users, but those are crucial.
I also forgot to mention that the ability to test something without building is crucial when you want to distribute binaries.
Is it actually useful to them? None of the Linux downstreams I know of have ever mentioned a preference for it. As far as I know, the only preference they've ever expressed to me is that the tests are included in the sdist.
It is at least useful to me, and I am packaging quite a few binaries.
FreeBSD relies on ``python setup.py test`` as its preferred test invocation, so it apparently doesn't find it useful either.
I would like to hear their rationale before guessing. It is hard for me to imagine they would not rather test the binaries than the sources. Something as simple as making sure you have not forgotten runtime dependencies becomes much easier this way.
I'm able to test runtime dependencies just fine without needing to put my tests inside of the package, to the extent anyone can actually test runtime dependencies in a test framework without actually depending on your test tools at runtime [1].

I really don't think either way is "better", to be honest. I think any attempt to argue that one way is better than the other relies on nebulous edge cases that don't really happen much in reality. You'll tell me that you've found it useful to run tests against an installed distribution without having to fetch the original tarball; I'll tell you that I've found it useful to run a newer version of the test suite against an older installed copy of the library. In practice, I think that you'll be able to manage finding the tarball and running those tests against your installed distribution, while I'd be able to manage to copy just the tests I care about out of the test suite and run them manually.

IOW, I don't think it really matters, and people should just do whatever they want.

[1] By this I mean, you'll never detect if you're missing a runtime dependency on something that your test dependencies include as part of their dependencies, since installing your test tools to run your tests would then trigger this missing dependency to be installed.

-----------------
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
On Oct 06, 2015, at 11:33 AM, David Cournapeau wrote:
I would like to hear their rationale before guessing. It is hard for me to imagine they would not rather test the binaries than the sources. Something as simple as making sure you have not forgotten runtime dependencies becomes much easier this way.
In Debian we can do both. It's usually good practice to run the package's test suite during the build process, on the unbuilt source tree. That doesn't work for all packages though (tox comes to mind as a recent example), so we *also* have a way to run the test suite on a built-and-installed version of the Debian binary package. I usually try to at least do an import test in this phase, but for some packages, like tox, I'll do a more extensive test.

In Ubuntu, failing the built-and-installed test (a.k.a. autopkgtest or DEP-8) will prevent a package from getting promoted from the -proposed channel to the release channel, which usually shields end users from seeing broken packages. Debian doesn't have this gateway in place yet. There is a way to run those on a local test build, so that's pretty nice.

Cheers,
-Barry
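The minimal import test Barry mentions can be as small as the sketch below; the module names are stdlib stand-ins for whatever modules the binary package would actually ship.

```python
import importlib

def smoke_test(module_names):
    """Import each module; a missing runtime dependency fails loudly here.

    This is the "does it even import" check run against the
    built-and-installed package, not the source tree.
    """
    for name in module_names:
        importlib.import_module(name)
    return True

# For a real package this list would be its top-level modules;
# these stdlib names are illustrative stand-ins.
ok = smoke_test(["json", "xml.etree.ElementTree"])
```

Because the imports happen against the installed copy, a forgotten runtime dependency surfaces as an ImportError here even when the full test suite is not shipped.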
On Oct 06, 2015, at 06:21 AM, Donald Stufft wrote:
FreeBSD relies on ``python setup.py test`` as its preferred test invocation, so it apparently doesn't find it useful either.
Oh how I wish there was a standard way to *declare* how to run the test suite, such that all our automated tools (or the humans :) didn't have to guess. At least in Debuntu though, we can pretty much make any of the usual ways work during package build. Cheers, -Barry
Barry Warsaw
On Oct 06, 2015, at 06:21 AM, Donald Stufft wrote:
FreeBSD relies on ``python setup.py test`` as its preferred test invocation
Oh how I wish there was a standard way to *declare* how to run the test suite, such that all our automated tools (or the humans :) didn't have to guess.
I think the above describes the standard way of declaring the test runner: The ‘setup.py test’ command. Now, I lament that more Python projects don't *conform to* that standard, but at least it exists. -- \ “I have never made but one prayer to God, a very short one: ‘O | `\ Lord, make my enemies ridiculous!’ And God granted it.” | _o__) —Voltaire | Ben Finney
On Oct 07, 2015, at 08:51 AM, Ben Finney wrote:
I think the above describes the standard way of declaring the test runner: The ‘setup.py test’ command.
Now, I lament that more Python projects don't *conform to* that standard, but at least it exists.
It's *a* standard but not *the* standard, just from pure observation. Cheers, -Barry
On Wed, Oct 7, 2015 at 12:51 AM, Ben Finney
I think the above describes the standard way of declaring the test runner: The ‘setup.py test’ command.
Now, I lament that more Python projects don't *conform to* that standard, but at least it exists.
There's a very simple answer to that: easy_install (that's what `setup.py test` will use to install deps). It has several design issues with respect to how packages are installed and how dependencies are managed.

Let's not use `setup.py test`. It's either bad or useless.

Thanks,
-- Ionel Cristian Mărieș, http://blog.ionelmc.ro
On 07.10.2015 at 00:08, Ionel Cristian Mărieș wrote:
On Wed, Oct 7, 2015 at 12:51 AM, Ben Finney wrote:

I think the above describes the standard way of declaring the test runner: The ‘setup.py test’ command.
Now, I lament that more Python projects don't *conform to* that standard, but at least it exists.
There's a very simple answer to that: easy_install (that's what `setup.py test` will use to install deps). It has several design issues with respect to how packages are installed and how dependencies are managed.
Let's not use `setup.py test`. It's either bad or useless.
Sorry, I am not an expert in the area of packaging, and I don't understand what you're saying. I thought "easy_install" was a very old and deprecated method. Why not use `setup.py test`?

Regards,
Thomas Güttler
-- http://www.thomas-guettler.de/
On Wed, Oct 7, 2015 at 8:12 AM, Thomas Güttler wrote:

I thought "easy_install" is a very old and deprecated method.

Indeed it is. That's why people put all sorts of custom "test" commands in their setup.py to work around the deficiencies of the "test" command setuptools provides. So we end up with lots of variations of "how to use pytest to run tests via `setup.py test`", "how to use pip to install deps, instead of what `setup.py test` normally does" and so on.

If you're gonna implement a test runner in your setup.py you might as well use a supported and well maintained tool: tox.

Why not use `setup.py test`? Because:
1. There's Tox, which does exactly that, and more. It's maintained. It gets features.

2. The "test" command will install the "tests_require" dependencies as eggs. You will end up with multiple versions of the same eggs right in your source checkout.

3. The "test" command will install the "tests_require" dependencies with easy_install. That means wheels cannot be used.

4. Because the builtin "test" command is so bare, people tend to implement a custom one. Everyone does something slightly different, and slightly buggy.

5. There's no established tooling that relies on `setup.py test`. There isn't even a test result protocol like TAP [1] for it. Why use something so limited and outdated if there's no practical advantage?
[1] https://testanything.org/
Thanks,
-- Ionel Cristian Mărieș, http://blog.ionelmc.ro
On October 7, 2015 at 7:58:55 AM, Ionel Cristian Mărieș (contact@ionelmc.ro) wrote:
On Wed, Oct 7, 2015 at 8:12 AM, Thomas Güttler wrote:
Why not use `setup.py test`?
Because:
1. There's Tox, which does exactly that, and more. It's maintained. It gets features.
tox and setup.py test are not really equivalent. There’s no way (to my knowledge) to test the item outside of a virtual environment. This is important for downstreams who want to test that the package builds and the tests execute successfully in their environment, not within some virtual environment.

-----------------
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
On Wed, Oct 7, 2015 at 3:18 PM, Donald Stufft
tox and setup.py test are not really equivalent. There’s no way (to my knowledge) to test the item outside of a virtual environment. This is important for downstreams who want to test that the package builds and the tests execute successfully in their environment, not within some virtual environment.
Hmmmm ... you're right. But making Tox not use virtualenvs is not impossible; much like how Detox works, we could have a "Tax" (just made that up) that just skips making any virtualenv. It's a matter of making two subclasses and a console_scripts entrypoint (I think).

I think it's a good name: ``use Tax instead of Tox if you wanna "tax" your global site-packages`` :-) We only need someone to do it.

Thanks,
-- Ionel Cristian Mărieș, http://blog.ionelmc.ro
On Wed, Oct 7, 2015 at 4:42 PM, Ionel Cristian Mărieș
On Wed, Oct 7, 2015 at 3:18 PM, Donald Stufft wrote:

tox and setup.py test are not really equivalent. There’s no way (to my knowledge) to test the item outside of a virtual environment. This is important for downstreams who want to test that the package builds and the tests execute successfully in their environment, not within some virtual environment.
Hmmmm ... you're right. But making Tox not use virtualenvs is not impossible; much like how Detox works, we could have a "Tax" (just made that up) that just skips making any virtualenv. It's a matter of making two subclasses and a console_scripts entrypoint (I think). I think it's a good name: ``use Tax instead of Tox if you wanna "tax" your global site-packages`` :-)
Just for kicks, I verified this, it's not hard at all: https://pypi.python.org/pypi/tax Barry may want to look at it, in case he has too many tox.ini files to copy-paste from :-) Thanks, -- Ionel Cristian Mărieș, http://blog.ionelmc.ro
On Oct 07, 2015, at 08:18 AM, Donald Stufft wrote:
tox and setup.py test are not really equivalent. There’s no way (to my knowledge) to test the item outside of a virtual environment. This is important for downstreams who want to test that the package build and the tests successfully are executed in their environment, not within some virtual environment.
I usually do not use tox to test a package when building it for Debian. It's pretty easy to extract the actual command used to run the test suite from the tox.ini, and that's what I put in the debian/rules file. It can make things build a little more reliably, and it also eliminates a build dependency on tox.

Cheers,
-Barry
On Oct 7, 2015 6:58 AM, "Ionel Cristian Mărieș" wrote:

On Wed, Oct 7, 2015 at 8:12 AM, Thomas Güttler <guettliml@thomas-guettler.de> wrote:

I thought "easy_install" is a very old and deprecated method.

Indeed it is. That's why people put all sorts of custom "test" commands in their setup.py to work around the deficiencies of the "test" command setuptools provides. So we end up with lots of variations of "how to use pytest to run tests via `setup.py test`", "how to use pip to install deps, instead of what `setup.py test` normally does" and so on.

If you're gonna implement a test runner in your setup.py you might as well use a supported and well maintained tool: tox.

Why not use `setup.py test`? Because:

1. There's Tox, which does exactly that, and more. It's maintained. It gets features.

Tox rocks.

* detox can run concurrent processes: https://pypi.python.org/pypi/detox/
* TIL timeit.default_timer measures **wall time** by default and not CPU time: concurrent test timings are likely different from linear tests run on a machine with load

2. The "test" command will install the "tests_require" dependencies as eggs. You will end up with multiple versions of the same eggs right in your source checkout.

* is there no way around this?
* is this required / spec'd / fixable?
3. The "test" command will install the "tests_require" dependencies with easy_install. That means wheels cannot be used.
would it be possible to add this to wheel? as if, after package deployment, in-situ tests are no longer relevant. (I think it wise to encourage TDD here)
4. Because the builtin "test" command is so bare people tend to implement a custom one. Everyone does something slightly different, and slightly buggy.
5. There's no established tooling that relies on `setup.py test`. There isn't even a test result protocol like TAP [1] for it. Why use something so limited and outdated if there's no practical advantage?

* README.rst test invocation examples (all, subset, one)
* Makefile (make test; [vim] :make)
* python setup.py nosetests
  http://nose.readthedocs.org/en/latest/api/commands.html
* python setup.py [test]
  https://pytest.org/latest/goodpractises.html#integrating-with-setuptools-pyt...
* xUnit XML: https://westurner.org/wiki/awesome-python-testing#xunit-xml

```
xUnit XML
https://en.wikipedia.org/wiki/XUnit
https://nose.readthedocs.org/en/latest/plugins/xunit.html
http://nosexunit.sourceforge.net/
https://pytest.org/latest/usage.html#creating-junitxml-format-files
https://github.com/xmlrunner/unittest-xml-reporting
https://github.com/zandev/shunit2/compare/master...jeremycarroll:master
```

* TAP protocol
Thanks, -- Ionel Cristian Mărieș, http://blog.ionelmc.ro
On Wed, Oct 7, 2015 at 3:20 PM, Wes Turner wrote:

2. The "test" command will install the "tests_require" dependencies as eggs. You will end up with multiple versions of the same eggs right in your source checkout.
* is there no way around this? * is this required / spec'd / fixable?
It's not that bad now; recent setuptools puts the eggs in a ".eggs" dir, so it's not as messy as before.
3. The "test" command will install the "tests_require" dependencies with easy_install. That means wheels cannot be used.
would it be possible to add this to wheel?
It's up to the maintainers of wheel/setuptools to figure this one out (or not) I think. Either way, you should search through the distutils-sig archives for clues/intentions, eg: https://mail.python.org/pipermail/distutils-sig/2014-December/thread.html#25... Thanks, -- Ionel Cristian Mărieș, http://blog.ionelmc.ro
On Tue, Oct 6, 2015 at 6:08 PM, Ionel Cristian Mărieș
On Wed, Oct 7, 2015 at 12:51 AM, Ben Finney wrote:

I think the above describes the standard way of declaring the test runner: The ‘setup.py test’ command.
Now, I lament that more Python projects don't *conform to* that standard, but at least it exists.
There's a very simple answer to that: easy_install (that's what `setup.py test` will use to install deps). It has several design issues with respect to how packages are installed and how dependencies are managed.
Let's not use `setup.py test`. It's either bad or useless.
Says who? Many of the projects I'm involved in use `setup.py test` exclusively, and for good reason: they all have C and/or Cython extension modules that need to be built for the tests to even run. Only setup.py knows about those extension modules and how to find and build them.

Using `setup.py test` ensures that everything required to run the package (including runtime dependencies) is built and ready, and then the tests can start. Without it, we would have to tell developers to go through a build process first and then make sure they're running the tests on the built code. `setup.py test` makes it a no-brainer.

For pure Python packages I think it's less important, and one can usually rely on "just run 'nose' or 'py.test'" (or "tox", but that's true regardless of how the tests are invoked outside of tox).

Best,
Erik
On Wed, Oct 7, 2015 at 6:13 PM, Erik Bray
Let's not use `setup.py test`. It's either bad or useless.
Says who? Many of the projects I'm involved in use `setup.py test` exclusively and for good reason--they all have C and/or Cython extension modules that need to be built for the tests to even run. Only setup.py knows about those extension modules and how to find and build them. Using `setup.py test` ensures that everything required to run the package (including runtime dependencies) is built and ready,
Well ok, then it's not useless. :-)

For pure Python packages I think it's less important and can usually rely on "just run 'nose', or 'py.test'" (or "tox" but that's true regardless of how the tests are invoked outside of tox).
That implies you would be testing code that you didn't install. That allows preventable mistakes, like publishing releases on PyPI that don't actually work, or do not even install at all (because you didn't test that). `setup.py test` doesn't really allow you to fully test that part, but Tox does. Thanks, -- Ionel Cristian Mărieș, http://blog.ionelmc.ro
On Wed, Oct 7, 2015 at 11:31 AM, Ionel Cristian Mărieș
On Wed, Oct 7, 2015 at 6:13 PM, Erik Bray wrote:

Let's not use `setup.py test`. It's either bad or useless.
Says who? Many of the projects I'm involved in use `setup.py test` exclusively and for good reason--they all have C and/or Cython extension modules that need to be built for the tests to even run. Only setup.py knows about those extension modules and how to find and build them. Using `setup.py test` ensures that everything required to run the package (including runtime dependencies) is built and ready,
Well ok, then it's not useless. :-)
For pure Python packages I think it's less important and can usually rely on "just run 'nose', or 'py.test'" (or "tox" but that's true regardless of how the tests are invoked outside of tox).
That implies you would be testing code that you didn't install. That allows preventable mistakes, like publishing releases on PyPI that don't actually work, or do not even install at all (because you didn't test that). `setup.py test` doesn't really allow you to fully test that part, but Tox does.
Which, incidentally, is a great reason for installable tests :)

Running in the source tree is great for development. But when preparing a release it's great to be able to create an sdist, install that into a virtualenv, and run `package.test()` or `python -m package.tests` or whatever. Occasionally catches problems with the source dist if nothing else.

Best,
Erik
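A `package.test()` entry point of the sort Erik mentions can be sketched as below; the test case is a stand-in so the example is self-contained, but in a real project it would live in the installed `mypkg.tests` subpackage (a hypothetical name).

```python
import unittest

# Stand-in for a real test case that would live inside mypkg.tests.
class TestInstalled(unittest.TestCase):
    def test_smoke(self):
        self.assertTrue(True)

def test():
    """What a hypothetical mypkg.test() would do after `pip install mypkg`:
    load the installed tests and run them, no sdist required."""
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestInstalled)
    return unittest.TextTestRunner(verbosity=0).run(suite)

result = test()
```

The same function can back a `python -m mypkg.tests` invocation by calling it from a `__main__.py` in the tests subpackage, which is what makes the "install the sdist into a virtualenv, then test the installed copy" release check cheap.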
On Wed, Oct 7, 2015 at 6:37 PM, Erik Bray
Which, incidentally, is a great reason for installable tests :)
Not really. Doesn't matter where you have the tests. It matters where you have the code being tested. Tests being installed is a mere consequence of the location of tests.
Running in the source tree is great for development. But when preparing a release it's great to be able to create an sdist, install that into a virtualenv, and run `package.test()` or `python -m package.tests` or whatever. Occasionally catches problems with the source dist if nothing else.
As I said, I like the idea. It's just that it's not feasible right now. Let's go over the issues again:

* Tests too bulky (pyca/cryptography)
* Tests can't be installed at all: https://github.com/getpelican/pelican/issues/1409
* Not clear how to install test dependencies. tests_require? extras? No deps? What about version conflicts and way too many deps being installed? Dependencies are like cars: they are very useful, but too many of them create problems.
* Real problems like standardized test output or a run protocol are not solved at all. There's little benefit to doing it like this if you can't build good CI tools around it.
* Workflows are under-specified. Users are not guided to make quality releases on PyPI.

Maybe we should have a PEP that would specify/propose some concrete solutions to all of those?

Thanks,
-- Ionel Cristian Mărieș, http://blog.ionelmc.ro
On Tue, Oct 06, 2015 at 05:21:27PM -0400, Barry Warsaw wrote:
On Oct 06, 2015, at 06:21 AM, Donald Stufft wrote:
FreeBSD relies on ``python setup.py test`` as its preferred test invocation, so it apparently doesn't find it useful either.
Oh how I wish there was a standard way to *declare* how to run the test suite, such that all our automated tools (or the humans :) didn't have to guess.
I have hopes for 'tox.ini' becoming the standard way to test a Python project. Marius Gedminas -- "Actually, the Singularity seems rather useful in the entire work avoidance field. "I _could_ write up that report now but if I put it off, I may well become a weakly godlike entity, at which point not only will I be able to type faster but my comments will be more on-target." - James Nicoll
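For anyone unfamiliar with it, a minimal tox.ini of the sort Marius hopes becomes standard might look like the following config sketch (the Python versions and the pytest dependency are illustrative):

```ini
; Minimal illustrative tox.ini: each envlist entry gets its own virtualenv,
; the package is built and installed into it, and then the commands run
; against the *installed* copy rather than the source tree.
[tox]
envlist = py27, py34

[testenv]
deps = pytest
commands = py.test {posargs}
```

Because tox installs the sdist into each virtualenv before running the commands, it also exercises the packaging itself, which is part of why it keeps coming up in this thread.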
On Oct 7, 2015 12:44 AM, "Marius Gedminas" wrote:

On Tue, Oct 06, 2015 at 05:21:27PM -0400, Barry Warsaw wrote:

On Oct 06, 2015, at 06:21 AM, Donald Stufft wrote:

FreeBSD relies on ``python setup.py test`` as its preferred test invocation, so it apparently doesn't find it useful either.
Oh how I wish there was a standard way to *declare* how to run the test suite, such that all our automated tools (or the humans :) didn't have to guess.
make test
I have hopes for 'tox.ini' becoming the standard way to test a Python project.
* https://tox.readthedocs.org/en/latest/config.html
* https://github.com/docker/docker-registry/blob/master/tox.ini #flake8
* dox = docker + tox | PyPI: https://pypi.python.org/pypi/dox | Src: https://git.openstack.org/cgit/stackforge/dox/tree/dox.yml
* docker-compose.yml | Docs: https://docs.docker.com/compose/ | Docs: https://github.com/docker/compose/blob/master/docs/yml.md
* https://github.com/kelseyhightower/kubernetes-docker-files/blob/master/docke...
* https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/pods.md...
* https://github.com/docker/docker/issues/8781 ( pods ( containers ) )
* http://docs.buildbot.net/latest/tutorial/docker.html
* http://docs.buildbot.net/current/tutorial/docker.html#building-and-running-b...

tox.ini often is not sufficient:

* [Makefile: make test/tox]
* setup.py
* tox.ini
* docker/platform-ver/Dockerfile
* [dox.yml]
* [docker-compose.yml]
* [CI config]
  * http://docs.buildbot.net/current/manual/configuration.html
  * jenkins-kubernetes, jenkins-mesos
On Oct 06, 2015, at 05:54 AM, Donald Stufft wrote:
I dislike putting tests inside the package.
I'm a big fan of putting the tests inside the package. I've often looked at a package's tests to get a better understanding of something that was unclear from the documentation, or didn't work the way I expected. Having the tests there in the installed package makes them easier to refer to. I also find that with tox+nose2 (my preferred one-two punch for testing), it makes it quite easy to find and run the full test suite or individual tests based on a regexp pattern. I also like the symmetry of having a docs/ directory for doctests and a tests/ directory for unit tests.

For complex packages with lots of subpackages, I have lots of tests/ directories, so that the unit tests are near the code they test. This way the source tree gets organized for free, without additional complexity in an outside-the-package tests tree.

YMMV,
-Barry
On October 6, 2015 at 5:20:03 PM, Barry Warsaw (barry@python.org) wrote:
On Oct 06, 2015, at 05:54 AM, Donald Stufft wrote:
I dislike putting tests inside the package.
I'm a big fan of putting the tests inside the package. I've often looked at a package's tests to get a better understanding of something that was unclear from the documentation, or didn't work the way I expected. Having the tests there in the installed package makes them easier to refer to. I also find that with tox+nose2 (my preferred one-two punch for testing), it's quite easy to find and run the full test suite or individual tests based on a regexp pattern. I also like the symmetry of having a docs/ directory for doctests and a tests/ directory for unittests.
For complex packages with lots of subpackages, I have lots of tests/ directories, so that the unittests are near the code they test. This way the source tree gets organized for free, without additional complexity in an outside-the-package tests tree.
I’m not sure I understand what you’re advocating here, it sounds like you want your tests at something like mycoolproject/tests so that they are importable from mycoolproject.tests… but then you talk about symmetry with docs/ and tests/ which sounds more like you have top level directories for tests/ docs/ and then mycoolproject/. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
On Oct 06, 2015, at 05:41 PM, Donald Stufft wrote:
I’m not sure I understand what you’re advocating here, it sounds like you want your tests at something like mycoolproject/tests so that they are importable from mycoolproject.tests… but then you talk about symmetry with docs/ and tests/ which sounds more like you have top level directories for tests/ docs/ and then mycoolproject/.
Ah, sorry for being unclear. I put tests in myproj/tests, so yes, they are importable via myproj.tests. I also put __init__.py files in my docs/ directory, so that the directory, but not the individual .rst doc files, is importable via myproj.docs. I do this because of the handy nose2 plugin I cargo-cult around that allows me to run individual tests or doctests with a command line switch. If I have subpackages, the pattern repeats, so e.g.

myproj/subpkg1/tests -> import myproj.subpkg1.tests
myproj/subpkg2/docs -> import myproj.subpkg2.docs

Cheers,
-Barry
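Barry's layout is easy to picture; here is a self-contained sketch (the package name "myproj" and the subpackage names are illustrative, not from a real project) that recreates the tree in a temp directory and shows the tests importing exactly as he describes:

```python
import os
import sys
import tempfile

# Recreate the layout Barry describes in a temp dir:
#   myproj/__init__.py
#   myproj/tests/__init__.py
#   myproj/subpkg1/__init__.py
#   myproj/subpkg1/tests/__init__.py
root = tempfile.mkdtemp()
for pkg in ("myproj", "myproj/tests", "myproj/subpkg1", "myproj/subpkg1/tests"):
    os.makedirs(os.path.join(root, pkg))
    open(os.path.join(root, pkg, "__init__.py"), "w").close()

sys.path.insert(0, root)
import myproj.tests            # tests ship inside the package...
import myproj.subpkg1.tests    # ...and the pattern repeats per subpackage
print(myproj.tests.__name__)   # -> myproj.tests
```

Installing the package installs the tests subdirectories with it, which is the point being argued for here.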
Barry Warsaw
I'm a big fan of putting the tests inside the package. I've often looked at a package's tests to get a better understanding of something that was unclear for the documentation, or didn't work the way I expected. Having the tests there in the installed package makes it easier to refer to.
That doesn't follow, or I'm not understanding you. If you have the tests in the source package, as is being advocated, you have the tests available for reference. So the *relative location* of the tests, within that source tree, doesn't argue for what you're saying.

Are you arguing the separate point of whether tests should be *installed* with the package?

--
“We now have access to so much information that we can find support for any prejudice or opinion.” —David Suzuki, 2008-06-27
Ben Finney
On Oct 07, 2015, at 08:54 AM, Ben Finney wrote:
Barry Warsaw
writes: I'm a big fan of putting the tests inside the package. I've often looked at a package's tests to get a better understanding of something that was unclear for the documentation, or didn't work the way I expected. Having the tests there in the installed package makes it easier to refer to.
That doesn't follow, or I'm not understanding you.
If you have the tests in the source package, as is being advocated, you have the tests available for reference. So the *relative location* of the tests, within that source tree, doesn't argue for what you're saying.
Since I'm not sure I follow that, I'll answer the question you asked:
Are you arguing the separate point of whether tests should be *installed* with the package?
Yes. We've had this conversation before in the context of Debian package sponsorship. I know and respect that you disagree. Cheers, -Barry
Barry Warsaw
On Oct 07, 2015, at 08:54 AM, Ben Finney wrote:
Barry Warsaw
writes: I'm a big fan of putting the tests inside the package. I've often looked at a package's tests to get a better understanding of something that was unclear for the documentation, or didn't work the way I expected. Having the tests there in the installed package makes it easier to refer to.
That doesn't follow, or I'm not understanding you. […]
Are you arguing the separate point of whether tests should be *installed* with the package?
Yes. We've had this conversation before in the context of Debian package sponsorship. I know and respect that you disagree.
Okay. That's quite an orthogonal dimension, though, to the *relative location* of tests within the source tree. So “I'm a big fan of putting tests inside the [Python] package [directory]” can't be motivated by “Having the tests there in the installed package”. The two aren't related, AFAICT.

--
“There was a point to this story, but it has temporarily escaped the chronicler's mind.” —Douglas Adams
Ben Finney
On Oct 07, 2015, at 09:46 AM, Ben Finney wrote:
So “I'm a big fan of putting tests inside the [Python] package [directory]” can't be motivated by “Having the tests there in the installed package”. The two aren't related, AFAICT.
It makes it easier for sure. When the tests are inside the package, nothing special has to be done; you just install the package and the tests subdirectories come along for the ride. If the tests are outside the package then you first have to figure out where they're going to go when they're installed, and then do something special to get them there. Cheers, -Barry
Thomas Güttler
Where should I put tests when packaging python modules?
When packaging them? The same place they go when not packaging them :-)
I want a "cowpath", an "obvious way"
For me, the obvious way is to have:
outside the module like this: https://github.com/pypa/sampleproject/tree/master/tests
and have ‘setup.py’ ensure the tests are distributed with the source package, but not installed.
I think there is no need to hurry. Let's wait one week, and then check which one is preferred.
More important than which is preferred, we should use the one which is best (regardless of how popular it may be). So instead of just counting votes, we should examine reasoned arguments in favour of the good options.

--
“Visitors are expected to complain at the office between the hours of 9 and 11 a.m. daily.” —hotel, Athens
Ben Finney
On Tue, 6 Oct 2015 09:07:46 +0200
Thomas Güttler
Dear experts, please decide:
inside the module like this answer:
http://stackoverflow.com/questions/5341006/where-should-i-put-tests-when-pac...
They should be inside the module. That way, you can check an installed module is ok by running e.g. "python -m mypackage.tests". Any other choice makes testing installed modules more cumbersome.
outside the module like this:
There is no actual reason to do that, except to save a couple of kilobytes if you are distributing your package on floppy disks for consumption on Z80-based machines with 64KB RAM.

Even Python *itself* puts its test suite inside the standard library, not outside it (though some Linux distros may strip it away). Try "python -m test.regrtest" (again, this may fail if your distro decided to ship the test suite in a separate package).

The PyP"A" should definitely fix its sample project to reflect good practices.

Regards
Antoine.
On 6 October 2015 at 08:51, Antoine Pitrou
They should be inside the module. That way, you can check an installed module is ok by running e.g. "python -m mypackage.tests". Any other choice makes testing installed modules more cumbersome.
One inconvenience with this is that if you use an external testing framework like nose or pytest, you either need to make your project depend on it, or you need to document that "python -m mypackage.tests" has additional dependencies that are not installed by default. With an external tests directory, the testing framework is just another "development requirement". It's only a minor point, conceded.
The PyP"A" should definitely fix its sample project to reflect good practices.
+1 on the sample project following recommended guidelines. When I originally wrote that project, the consensus (such as it was) was in favour of tests outside the installed project. But that may have simply reflected the group of people who responded at the time. It does however have the advantage that it's consistent with how other PyPA projects like pip and virtualenv structure their tests.

The big problem is that I don't think there *is* a consensus on best practice. Both options have their supporters.

It would also be very easy to take the view that the PyPA sample project should omit the test directory altogether, as it's a sample for the *packaging* guide, and development processes like testing are out of scope (that's why we don't include a documentation directory, or recommend sphinx, for example). Personally, I think if we go to that extreme, the sample project becomes useless as a "real world" template, but maybe people wanting a starter project template should be directed to projects such as cookiecutter instead.

Paul
On Oct 6, 2015 3:17 AM, "Paul Moore"
On 6 October 2015 at 08:51, Antoine Pitrou
wrote: They should be inside the module. That way, you can check an installed module is ok by running e.g. "python -m mypackage.tests". Any other choice makes testing installed modules more cumbersome.
One inconvenience with this is that if you use an external testing framework like nose or pytest, you either need to make your project depend on it, or you need to document that "python -m mypackage.tests" has additional dependencies that are not installed by default.
With an external tests directory, the testing framework is just another "development requirement".
It's only a minor point, conceded.
Otherwise, try/except imports. ([ ] @skipif in the stdlib would be helpful)
The PyP"A" should definitely fix its sample project to reflect good practices.
+1 on the sample project following recommended guidelines. When I originally wrote that project, the consensus (such as it was) was in favour of tests outside the installed project. But that may have simply reflected the group of people who responded at the time. It does however have the advantage that it's consistent with how other PyPA projects like pip and virtualenv structure their tests.
many of these cookiecutter Python project templates also define e.g. tox.ini and .travis.yml: https://github.com/audreyr/cookiecutter-pypackage/blob/master/README.rst#sim...
The big problem is that I don't think there *is* a consensus on best practice. Both options have their supporters.
It would also be very easy to take the view that the PyPA sample project should omit the test directory altogether, as it's a sample for the *packaging* guide, and development processes like testing are out of scope (that's why we don't include a documentation directory, or recommend sphinx, for example). Personally, I think if we go to that extreme, the sample project becomes useless as a "real world" template, but maybe people wanting a starter project template should be directed to projects such as cookiecutter instead.
I think it wise to encourage TDD. https://github.com/audreyr/cookiecutter/blob/master/README.rst#python
Paul
On Tue, 6 Oct 2015 09:17:22 +0100 Paul Moore
On 6 October 2015 at 08:51, Antoine Pitrou
wrote: They should be inside the module. That way, you can check an installed module is ok by running e.g. "python -m mypackage.tests". Any other choice makes testing installed modules more cumbersome.
One inconvenience with this is that if you use an external testing framework like nose or pytest, you either need to make your project depend on it, or you need to document that "python -m mypackage.tests" has additional dependencies that are not installed by default.
With an external tests directory, the testing framework is just another "development requirement".
Doesn't / didn't setuptools have something called test_requires?
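For reference, the setuptools keyword is spelled `tests_require` (consumed by the `setup.py test` command); declaring a `test` extra via `extras_require` is the other common way to make test dependencies installable on demand. A minimal sketch (the project name and dependency choices are hypothetical), shown as a plain dict so the spelling of each knob is visible:

```python
# Keyword arguments you would pass to setuptools.setup() in setup.py:
SETUP_KWARGS = dict(
    name="mypackage",             # hypothetical project
    version="0.1",
    packages=["mypackage"],
    tests_require=["pytest"],     # used by the "setup.py test" command
    extras_require={
        # opt-in install of the test dependencies:
        #     pip install mypackage[test]
        "test": ["pytest"],
    },
)
```

With the extra, the testing framework stays a "development requirement" unless the user explicitly asks for it.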
It would also be very easy to take the view that the PyPA sample project should omit the test directory altogether, as it's a sample for the *packaging* guide, and development processes like testing are out of scope (that's why we don't include a documentation directory, or recommend sphinx, for example).
That sounds like the best course to me. Regards Antoine.
On Tue, Oct 6, 2015 at 10:51 AM, Antoine Pitrou
They should be inside the module. That way, you can check an installed module is ok by running e.g. "python -m mypackage.tests". Any other choice makes testing installed modules more cumbersome.
Does that really make sense? I haven't heard of any user actually running tests that way. To be honest, I haven't ever run Python's own test suite as part of a user installation.

I've seen some projects that lump lots of test data and odd files into their packages' tests, and that created install issues (Pelican is one example; pretty sure there are others). On the other hand, if the user really wants to run the tests, he can just get the sources (which would naturally include everything)?

It seems odd to suggest something as a best practice without giving any clue of how test dependencies would be managed. Just because CPython does it doesn't mean libraries should.

Thanks,
-- Ionel Cristian Mărieș, http://blog.ionelmc.ro
Ionel Cristian Mărieș
I've seen some projects that lump up lots of test data and crazy files in their packages tests and that created install issues.
On the other hand, if the user really wants to run the tests he can just get the sources (that would naturally include everything)?
Yes, this is a sensible approach:

* The source package contains all the source files a developer would use to make further changes and test them.
* The package for installation contains only those files useful to run-time users, plus metadata (e.g. copyright information).

I highly recommend it, and I would like the PyPA to also recommend the above approach.

It does, though, require the acknowledgement of a separate *build* step in the development-and-release process. The build step is prior to the packaging step, and it generates run-time files from the collection of source files. That separation of a discrete build step is crucial for many good practices: generating documentation, integration testing, OS packaging, etc.

--
“The restriction of knowledge to an elite group destroys the spirit of society and leads to its intellectual impoverishment.” —Albert Einstein
Ben Finney
On Oct 06, 2015, at 08:18 PM, Ben Finney wrote:
Yes, this is a sensible approach:
* The source package contains all the source files a developer would use to make further changes and test them.
* The package for installation contains only those files useful to run-time users, plus metadata (e.g. copyright information).
I generally don't bother stripping out in-package tests when building binary packages for Debian. First because it's more work for (IMHO) dubious value, and second because I think installed test files are actually useful sometimes. Cheers, -Barry
On Tue, Oct 6, 2015 at 9:30 AM, Ionel Cristian Mărieș
On Tue, Oct 6, 2015 at 10:51 AM, Antoine Pitrou
wrote: They should be inside the module. That way, you can check an installed module is ok by running e.g. "python -m mypackage.tests". Any other choice makes testing installed modules more cumbersome.
Does that really make sense? I haven't heard of any user actually running tests that way. To be honest I haven't ever run Python's own test suite as part of a user installation.
It makes a lot of sense for downstream packagers. Allowing testing installed packages is also the simplest way to enable testing on target machines which are different from the build machines. David
On Oct 6, 2015 4:24 AM, "David Cournapeau"
On Tue, Oct 6, 2015 at 9:30 AM, Ionel Cristian Mărieș
On Tue, Oct 6, 2015 at 10:51 AM, Antoine Pitrou
wrote:
They should be inside the module. That way, you can check an installed module is ok by running e.g. "python -m mypackage.tests". Any other choice makes testing installed modules more cumbersome.
Does that really make sense? I haven't heard of any user actually running tests that way. To be honest I haven't ever run Python's own test suite as part of a user installation.
It makes a lot of sense for downstream packagers. Allowing testing installed packages is also the simplest way to enable testing on target machines which are different from the build machines.
Self-testable programs are really ideal (e.g. POST, power-on self-test). A relevant recent topical discussion of e.g. CRC checks and an optional '-t' preemptive self-test CLI parameter: https://github.com/audreyr/cookiecutter-pypackage/pull/52
David
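Wes's '-t' idea can be sketched as a CLI that runs a bundled smoke test before doing any real work; everything below (the flag name, the program name, the test itself) is hypothetical, not from any of the linked projects:

```python
import argparse
import unittest

class SmokeTest(unittest.TestCase):
    # Stand-in for the test suite shipped inside the package.
    def test_sanity(self):
        self.assertEqual(1 + 1, 2)

def self_test():
    # Run the bundled tests and report overall success.
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(SmokeTest)
    return unittest.TextTestRunner(verbosity=0).run(suite).wasSuccessful()

def main(argv):
    parser = argparse.ArgumentParser(prog="myprog")
    parser.add_argument("-t", "--self-test", action="store_true",
                        help="run the bundled self-test and exit")
    args = parser.parse_args(argv)
    if args.self_test:
        return 0 if self_test() else 1
    # ... normal program logic would go here ...
    return 0

print(main(["-t"]))  # prints 0 when the smoke test passes
```

This only works if the tests actually ship with the installed program, which is the point under discussion.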
On Tue, Oct 6, 2015 at 12:51 PM, Wes Turner
Self-testable programs are really ideal (e.g. POST, power-on self-test). A relevant recent topical discussion of e.g. CRC checks and an optional '-t' preemptive self-test CLI parameter: https://github.com/audreyr/cookiecutter-pypackage/pull/52
It would be interesting to talk about what's worth including in a "self-test" feature. Most suites aren't suitable for including as a whole. You don't want to include integration (functional) tests for sure :-)

There's also the ever-unanswered question of how to deal with test dependencies. Some will think that it's ok to always install them, but then again, seeing how `mock` depends on `pbr`, and how `pbr` just decides to alter your `setup.py sdist/bdist_*` output without being asked or invoked, has made me very wary of this practice. Now I have to make sure my Python installs have certain versions of `pbr`, or no `pbr` at all, every time I want to build a package :-(

Thanks,
-- Ionel Cristian Mărieș, http://blog.ionelmc.ro
On 2015-10-06 13:47:51 +0300 (+0300), Ionel Cristian Mărieș wrote: [...]
`pbr` just decides to alter your `setup.py sdist/bdist_*` output without being asked or invoked [...]
Assuming you're talking about https://launchpad.net/bugs/1483067 then it was fixed in 1.7.0. If you're still seeing it in later releases, a detailed bug report would be most appreciated since that's not at all the intent. -- Jeremy Stanley
On 6 October 2015 at 23:47, Ionel Cristian Mărieș
On Tue, Oct 6, 2015 at 12:51 PM, Wes Turner
wrote: self-testable programs are really ideal (e.g POST power-on self test) relevant recent topical discussion of e.g CRC and an optional '-t' preemptive CLI parameter: https://github.com/audreyr/cookiecutter-pypackage/pull/52
It would be interesting to talk about what's worth including in a "self-test" feature.

Most suites aren't suitable for including as a whole. You don't want to include integration (functional) tests for sure :-)

There's also the ever-unanswered question of how to deal with test dependencies. Some will think that it's ok to always install them, but then again, seeing how `mock` depends on `pbr`, and how `pbr` just decides to alter your `setup.py sdist/bdist_*` output without being asked or invoked, has made me very wary of this practice. Now I have to make sure my Python installs have certain versions of `pbr`, or no `pbr` at all, every time I want to build a package :-(
Hang on, there's clearly a *huge* gap in understanding here.

pbr does *not* modify *anyone's* setup.py output unless it's enabled. ***setuptools*** enables all its plugins unconditionally (because entry points, yay), and then pbr has to explicitly *opt out* of doing anything. Which *we do*.

There was a bug where a newly added thing didn't have this opt-out, and it's since been fixed. There's a separate related bug where the opt-out checking code isn't quite robust enough, and that's causing some havoc, but there's a patch up to fix it, and as soon as we have a reliable test we'll be landing it and cutting a release.

If there was a way to plug into setuptools where pbr code wasn't called on random packages, and we didn't have to manually opt out, why, that would be brilliant.

[Note: I'm not saying setuptools is buggy - it chose this interface, and that's fine, but the consequence is bugs like this that we have to fix.]

-Rob
--
Robert Collins
On Wed, Oct 7, 2015 at 2:23 AM, Robert Collins
Hang on, there's clearly a *huge* gap in understanding here.

pbr does *not* modify *anyone's* setup.py output unless it's enabled.
Unless it's >=1.7.0. You can't blame setuptools having entry points for pbr doing weird stuff to distributions by abusing said entry points. For reference: https://bugs.launchpad.net/pbr/+bug/1483067

There's nothing special about pbr here. It's not like it's the first package doing dangerous stuff (distribute, subprocess.run, pdbpp). I really like pdbpp, but you don't put that in production.

Thanks,
-- Ionel Cristian Mărieș, http://blog.ionelmc.ro
On October 6, 2015 at 7:47:16 PM, Ionel Cristian Mărieș (contact@ionelmc.ro) wrote:
Unless it's >=1.7.0. You can't blame setuptools having entrypoints for pbr doing weird stuff to distributions by abusing said entry points.
Relax. pbr had a bug; it was acknowledged as a bug and subsequently fixed. The design of setuptools entry points made it an easier bug to have happen than if setuptools entry points were designed differently. Using the entry points given to it by setuptools is hardly abusing anything. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
On Tue, 6 Oct 2015 11:30:00 +0300
Ionel Cristian Mărieș
On Tue, Oct 6, 2015 at 10:51 AM, Antoine Pitrou
wrote: They should be inside the module. That way, you can check an installed module is ok by running e.g. "python -m mypackage.tests". Any other choice makes testing installed modules more cumbersome.
Does that really make sense? I haven't heard of any user actually running tests that way. To be honest I haven't ever run Python's own test suite as part of a user installation.
There are several situations besides the "downstream packagers" use case mentioned elsewhere:

* One of your users reports a weird issue: you can ask them to run the test suite on their installation to check that the nominal behaviour of the package is ok on their machine. If you don't ship the test suite, you have to ask them to do extra manual steps in order to do this verification, which can be cumbersome and delay a proper response to the issue.

* Your package requires non-Python data files for proper functioning, and you want to check that the installation procedure puts them in the right place. The natural way to do that is to run the test suite on the installed package.

Really, "ship the test suite" should be the norm, and not shipping it should be the exception (if e.g. testing needs large data files).

Regards
Antoine.
On October 6, 2015 at 7:00:41 AM, Antoine Pitrou (solipsis@pitrou.net) wrote:
On Tue, 6 Oct 2015 11:30:00 +0300 Ionel Cristian Mărieș wrote:
On Tue, Oct 6, 2015 at 10:51 AM, Antoine Pitrou wrote:
They should be inside the module. That way, you can check an installed module is ok by running e.g. "python -m mypackage.tests". Any other choice makes testing installed modules more cumbersome.
Does that really make sense? I haven't heard of any user actually running tests that way. To be honest I haven't ever ran Python's own tests suite as part of a user installation.
There are several situations besides the "downstream packagers" use case mentioned somewhere else:
* One of your users reports a weird issue: you can ask them to run the test suite on their installation to check that the nominal behaviour of the package is ok on their machine. If you don't ship the test suite, you have to ask them to do extra manual steps in order to do this verification, which can be cumbersome and delay a proper response to the issue.
I've never, in my entire life, been asked to run someone's test suite to validate a bug, nor have I asked someone to run a test suite to validate a bug. This is still ignoring the problem of test dependencies, so you'd still need to ask them to install some number of dependencies, and I think it's fairly trivial to ask someone to download a tarball, untar it, and run two commands.
* Your package requires non-Python data files for proper functioning, and you want to check the installation procedure puts them in the right place. The natural way to do that is to run the test suite on the installed package.
You're confusing "ships the test suite as part of the package" with "runs the test suite on the installed package". The two aren't really related; you can run the tests against an installed package trivially in either situation.
Really, "ship the test suite" should be the norm and not shipping it should be the exception (if e.g. testing needs large data files).
Regards
Antoine.
----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
On Tue, 6 Oct 2015 07:07:31 -0400
Donald Stufft
I've never, in my entire life [...]
Can I suggest your entire life is an anecdotal data point here?
This is still ignoring the problems of test dependencies
Only if your tests have dependencies that runtime doesn't have.
as well so you'll still need to ask them to install some number of dependencies, and I think it's fairly trivial to ask someone to download a tarball, untar it, and run two commands.
Any number of things can be described as trivial depending on the skillset and patience of the user. When users report a bug, they are not expecting to be asked to download and "untar" stuff. Not every user is a programmer.
You're confusing "ships the test suite as part of the package" with "runs the test suite on the installed package". The two aren't really related, you can run the tests against an installed package trivially in either situation.
It's not trivial, because if you aren't careful you'll be running them against the tarball / checkout instead (because of Python munging the PYTHONPATH behind your back, for example), and this can go unnoticed for a long time. By contrast, if you don't need a tarball / checkout to run them, you can guarantee they are run against the installed location. Regards Antoine.
On Tue, Oct 6, 2015 at 3:13 PM, Antoine Pitrou
On Tue, 6 Oct 2015 07:07:31 -0400 Donald Stufft
wrote: I've never, in my entire life [...]
Can I suggest your entire life is an anecdotal data point here?
Make that two anecdotal data points :-)

Any number of things can be described as trivial depending on the skillset and patience of the user. When users report a bug, they are not expecting to be asked to download and "untar" stuff. Not every user is a programmer.
But seriously now, your arguments are also anecdotal. Let's not pretend we're objective here. That sort of attitude is disingenuous and will quickly devolve this discussion into mere ad hominems.
It's not trivial, because if you aren't careful you'll be running them against the tarball / checkout instead (because of Python munging the PYTHONPATH behind your back, for example), and this can go unnoticed for a long time.
This is a flaw of the project layout, really. If you don't isolate your sources from the import paths, then you're probably testing the wrong way; in other words, you're probably not testing the installed code. Very few test runners change the current working directory by default [1], so it's better to just use a better project layout. pyca/cryptography https://github.com/pyca/cryptography is a good example.

[1] trial is the only one that I know of, and it's hardly popular for testing anything but projects that use Twisted.
On Tue, 6 Oct 2015 15:34:38 +0300
Ionel Cristian Mărieș
Very few test runners change the current working directory by default [1], so it's better to just get a better project layout. pyca/cryptography https://github.com/pyca/cryptography is a good example.
The "src" convention is actually terrible when working with Python code, since suddenly you can't experiment easily on a VCS checkout; you have to do extra steps and/or write helper scripts for it.

The fact that few Python projects, including amongst the most popular projects, use that convention means it's really not considered a good practice, nor convenient.

Regards
Antoine.
On October 6, 2015 at 8:51:30 AM, Antoine Pitrou (solipsis@pitrou.net) wrote:
On Tue, 6 Oct 2015 15:34:38 +0300 Ionel Cristian Mărieș wrote:
Very few test runners change the current working directory by default [1], so it's better to just get a better project layout. pyca/cryptography is a good example.
The "src" convention is actually terrible when working with Python code, since suddenly you can't experiment easily on a VCS checkout, you have to do extra steps and/or write helper scripts for it.
Without doing it, you have very little assurance you're actually testing against the installed project and not the project that's sitting in curdir. This is why pyca/cryptography does it: attempting to run the copy in . won't do anything but raise an exception, since the .so won't be built.

It doesn't really make experimenting in a VCS checkout any harder, since all you need to do first is run ``pip install -e .`` and it will do a development install and add the src/ directory to sys.path.
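For the record, a minimal sketch of the src/ layout being discussed (the package name "mypkg" is hypothetical); the snippet rebuilds the layout in a temp directory and shows why a bare checkout root yields nothing importable:

```python
import os
import tempfile
from setuptools import find_packages

# Recreate the src/ layout under discussion in a temp dir:
#   setup.py   (would use package_dir={"": "src"} and
#               packages=find_packages("src"))
#   src/mypkg/__init__.py
#   tests/test_mypkg.py
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "src", "mypkg"))
open(os.path.join(root, "src", "mypkg", "__init__.py"), "w").close()
os.makedirs(os.path.join(root, "tests"))

# The package is only discoverable under src/, so the checkout root is
# not importable until "pip install -e ." puts src/ on sys.path:
print(find_packages(os.path.join(root, "src")))  # -> ['mypkg']
print(find_packages(root))                       # nothing at the root
```

That forced indirection is exactly the "assurance you're testing the installed copy" Donald is describing.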
The fact that few Python projects, including amongst the most popular projects, use that convention means it's really not considered a good practice, nor convenient.
Of course, the same argument can be made for installing tests, since it's not very common. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
On Tue, 6 Oct 2015 08:57:12 -0400
Donald Stufft
It doesn't really make experimenting in a VCS any harder, since all you need to do first is run ``pip install -e .`` and it will do a development install and add the src/ directory to sys.path.
That means you're suddenly polluting your Python install with a development package. So either you create a dedicated virtualenv (more command-line boilerplate, including each time you switch from one project to another), or you risk messing with other package installs.

Regards
Antoine.
On October 6, 2015 at 9:08:12 AM, Antoine Pitrou (solipsis@pitrou.net) wrote:
On Tue, 6 Oct 2015 08:57:12 -0400 Donald Stufft wrote:
It doesn't really make experimenting in a VCS any harder, since all you need to do first is run ``pip install -e .`` and it will do a development install and add the src/ directory to sys.path.
That means you're suddenly polluting your Python install with a development package. So either you create a dedicated virtualenv (more command-line boilerplate, including each time you switch from one project to another), or you risk messing with other package installs.
Unless your project has zero dependencies you’ll want to use a dedicated virtual environment anyways. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
On Tue, 6 Oct 2015 09:33:03 -0400
Donald Stufft
On October 6, 2015 at 9:08:12 AM, Antoine Pitrou (solipsis@pitrou.net) wrote:
On Tue, 6 Oct 2015 08:57:12 -0400 Donald Stufft wrote:
It doesn't really make experimenting in a VCS any harder, since all you need to do first is run ``pip install -e .`` and it will do a development install and add the src/ directory to sys.path.
That means you're suddenly polluting your Python install with a development package. So either you create a dedicated virtualenv (more command-line boilerplate, including each time you switch from one project to another), or you risk messing with other package installs.
Unless your project has zero dependencies you’ll want to use a dedicated virtual environment anyways.
Not necessarily, you can share that environment with other projects. Regards Antoine.
On Tue, Oct 6, 2015 at 1:57 PM, Donald Stufft
On October 6, 2015 at 8:51:30 AM, Antoine Pitrou (solipsis@pitrou.net) wrote:
On Tue, 6 Oct 2015 15:34:38 +0300 Ionel Cristian Mărieș wrote:
Very few test runners change the current working directory by default [1], so it's better to just get a better project layout. pyca/cryptography is a good example.
The "src" convention is actually terrible when working with Python code, since suddenly you can't experiment easily on a VCS checkout, you have to do extra steps and/or write helper scripts for it.
Without doing it, you have very little assurances you’re actually testing against the installed project and not the project that's sitting in curdir. This is why pyca/cryptography does it, attempting to run the copy in . won't do anything but raise an exception since the .so won't be built.
It doesn't really make experimenting in a VCS any harder, since all you need to do first is run ``pip install -e .`` and it will do a development install and add the src/ directory to sys.path.
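For concreteness, the src/ layout being discussed looks roughly like the sketch below (project and package names hypothetical). The key property is that the package is not importable from a bare checkout, so after ``pip install -e .`` you are always testing the installed copy, never whatever happens to sit in the current directory:

```shell
# Sketch of the src/ layout under discussion (names hypothetical):
#
#   myproject/
#   ├── setup.py            # declares package_dir={"": "src"}
#   ├── src/
#   │   └── mypackage/
#   │       └── __init__.py
#   └── tests/
#       └── test_basic.py
#
# Recreate just enough of it in a temp dir to show the key property:
proj=$(mktemp -d)
mkdir -p "$proj/src/mypackage" "$proj/tests"
touch "$proj/src/mypackage/__init__.py"
cd "$proj"

# From the project root the package is NOT on sys.path, so accidentally
# testing the checkout instead of the installed copy is impossible:
if python3 -c "import mypackage" 2>/dev/null; then
    echo "importable from checkout"
else
    echo "not importable from checkout"
fi
# A development install (pip install -e .) is what puts src/ on sys.path.
```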
The fact that few Python projects, including amongst the most popular projects, use that convention means it's really not considered a good practice, nor convenient.
Of course, the same argument can be made for installing tests, since it's not very common.
So we can actually get some data here :)

At Enthought, we support around 400 packages. More than half of them are Python packages, and we can make the reasonable assumption that the vast majority of those packages are fairly popular.

More precisely, if I install all our supported packages on Linux:
- I count ~ 308 packages by making the assumption that one directory in site-packages is a package (wrong but decent approximation)
- I count as `with tests` a package with at least one directory called test in said package

With those assumptions, I count 46 % packages with tests installed. So it is not "not very common". Granted, we are biased toward scientific packages, which tend to include tests in the package, but we also package popular web/etc... packages.

David
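The heuristic David describes can be sketched in a few lines of Python. This is an illustration of the stated methodology, not his actual script; the directory names and the treatment of nesting are assumptions:

```python
# Rough sketch of the counting heuristic described above: treat each
# top-level directory in site-packages as a package (a wrong but decent
# approximation), and count it as "with tests" if it contains a
# directory named test or tests anywhere inside it.
import os


def count_packages_with_tests(sp_dir):
    total = with_tests = 0
    for name in os.listdir(sp_dir):
        pkg = os.path.join(sp_dir, name)
        if not os.path.isdir(pkg):
            continue
        total += 1
        for _root, dirs, _files in os.walk(pkg):
            if any(d in ("test", "tests") for d in dirs):
                with_tests += 1
                break
    return total, with_tests
```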
On 6 October 2015 at 14:51, David Cournapeau
Of course, the same argument can be made for installing tests, since it's not very common.
So we can actually get some data here :)
At Enthought, we support around 400 packages. More than half of them are Python packages, and we can make the reasonable assumption that the vast majority of those packages are fairly popular.
More precisely, if I install all our supported packages on Linux:
- I count ~ 308 packages by making the assumption that one directory in site-packages is a package (wrong but decent approximation)
- I count as `with tests` a package with at least one directory called test in said package
With those assumptions, I count 46 % packages with tests installed. So it is not "not very common". Granted, we are biased toward scientific packages, which tend to include tests in the package, but we also package popular web/etc... packages.
Interesting. Prompted by this, I did a check on my Python 3.4 installation. 84 packages, of which 21 have a "tests" subdirectory containing __init__.py and 4 have a "test" subdirectory containing __init__.py. That's 25%, which is certainly higher than I would have guessed. Of course, the implication is that 75% (or 54% in David's case) use test suites *not* installed with the package :-) But I think it's fair to say that installing the tests with the package is a common enough model, and it clearly does have some benefits. To answer the OP's question, I stand by my original impression, which is that "it depends" :-) I don't want the sample project to *not* include tests, so as far as that is concerned, I'm going with "the status quo wins there" and suggesting there's no reason to change. Paul
On 2015-10-06 15:10:59 +0100 (+0100), Paul Moore wrote: [...]
That's 25%, which is certainly higher than I would have guessed. Of course, the implication is that 75% (or 54% in David's case) use test suites *not* installed with the package :-) [...]
It seems rather optimistic to assume that 100% of projects have test suites. More accurately, 75% either have no tests _or_ keep them outside the package. ;) -- Jeremy Stanley
On Tue, Oct 6, 2015 at 9:51 AM, David Cournapeau
More precisely, if I install all our supported packages on Linux:
- I count ~ 308 packages by making the assumption that one directory in site-packages is a package (wrong but decent approximation)
- I count as `with tests` a package with at least one directory called test in said package
With those assumptions, I count 46 % packages with tests installed. So it is not "not very common". Granted, we are biased toward scientific packages, which tend to include tests in the package, but we also package popular web/etc... packages.
For what it's worth, the Zope community has long been including tests inside packages (regardless of whether that's right or wrong). How the dependencies issue has been addressed has varied over time and with the authors of each package; sometimes tests_require is used, other times there's a [test] extra (as setuptools is universally accepted in the Zope community), and other times the test dependencies are just made package dependencies. Since zc.buildout is widely used in the Zope community, getting the right dependencies for the right processes has never been a significant issue. (And yes, most Zope packages use a src/ directory. Not to isolate the tests and the implementation, but to isolate the code from other files that may be present in the package.) -Fred -- Fred L. Drake, Jr. <fred at fdrake.net> "A storm broke loose in my mind." --Albert Einstein
On Tue, Oct 6, 2015 at 3:51 PM, Antoine Pitrou
The "src" convention is actually terrible when working with Python code, since suddenly you can't experiment easily on a VCS checkout, you have to do extra steps and/or write helper scripts for it.
The fact that few Python projects, including amongst the most popular projects, use that convention means it's really not considered a good practice, nor convenient.
Convenience over correctness - I can understand that. Using virtualenvs (or tox) is not always something people want. People get attached to certain workflows, and that's fine with me. But let's not confuse that with what's right. Just because it's popular doesn't mean it's anywhere close to correct. It means it works; once in a while people hit some pitfalls, suffer for it, but continue the same way. The same people then complain about the "terrible" packaging experience in Python. I think we should look at this more meticulously, not solely through the perspective of what's popular. Thanks, -- Ionel Cristian Mărieș, http://blog.ionelmc.ro
On Tue, Oct 6, 2015 at 8:34 AM, Ionel Cristian Mărieș
On Tue, Oct 6, 2015 at 3:13 PM, Antoine Pitrou
wrote: On Tue, 6 Oct 2015 07:07:31 -0400 Donald Stufft
wrote: I've never, in my entire life [...]
Can I suggest your entire life is an anecdotal data point here?
Make that two anecdotal data points :-)
Any number of things can be described as trivial depending on the skillset and patience of the user. When users report a bug, they are not expecting to be asked to download and "untar" stuff. Not every user is a programmer.
But seriously now, your arguments are also anecdotal. Let's not pretend we're objective here. That sort of attitude is disingenuous and will quickly devolve this discussion into mere ad hominems.
Okay, though, so maybe if there is nothing to offer here but anecdata then maybe we should stop acting like there's "one right way here". I have projects that install their test suite and test dependencies because it is frequently useful to ask users to run a self-test (and users themselves want to be able to do it for a variety of reasons). There are other projects where it doesn't make sense, and those don't have to install the tests (I still think in those cases that the tests should live in the package instead of outside it, but simply not installed). In any case, let's not get trapped into endless circular discussions about what is correct, period, and instead consider individual use cases--not dismissing individual projects' or peoples' experiences and needs--and discuss what the most appropriate action is for those use cases. Python projects are not monolithic in their audiences (and that includes developer audiences and user audiences). Erik
On Tue, Oct 6, 2015 at 6:33 PM, Erik Bray
Okay, though, so maybe if there is nothing to offer here but anecdata then maybe we should stop acting like there's "one right way here". I have projects that install their test suite and test dependencies because it is frequently useful to ask users to run a self-test (and users themselves want to be able to do it for a variety of reasons).
There are other projects where it doesn't make sense, and those don't have to install the tests (I still think in those cases that the tests should live in the package instead of outside it, but simply not installed).
To be honest, I like the idea of providing the tests in the installed package. Using the test suite as a debugging/diagnostic tool is obviously desirable. It's just that it's impractical for general use. Another anecdote I know, but humour these two concerns:

* How to deal with dependencies?
  ** Should we use extras? Installing test deps has the disadvantage of version conflicts. Unless we make some sort of "virtualenv-for-tests" tool?
  ** Should we vendor the deps? But isn't that maybe too hard to do (or plain wrong for other reasons)?
  ** Should we avoid having deps? That can be too limiting in some situations; unittest is very bare compared to pytest or nose.
* What sort of tests should be included? Integration tests are a special case, if they need external services, temporary storage or what not. Would we then need to have clear separation for different types of tests?

I'm not saying tests inside the package is bad at all. But to flaunt it around as a "best practice" requires at least some recommendations for the two concerns illustrated above, and probably better tooling. Thanks, -- Ionel Cristian Mărieș, http://blog.ionelmc.ro
On Tue, Oct 6, 2015 at 12:04 PM, Ionel Cristian Mărieș
On Tue, Oct 6, 2015 at 6:33 PM, Erik Bray
wrote: Okay, though, so maybe if there is nothing to offer here but anecdata then maybe we should stop acting like there's "one right way here". I have projects that install their test suite and test dependencies because it is frequently useful to ask users to run a self-test (and users themselves want to be able to do it for a variety of reasons).
There are other projects where it doesn't make sense, and those don't have to install the tests (I still think in those cases that the tests should live in the package instead of outside it, but simply not installed).
To be honest, I like the idea of providing the tests in the installed package. Using the test suite as a debugging/diagnostic tool is obviously desirable. It's just that it's impractical for general use. Another anecdote I know, but humour these two concerns:
Those are all fair questions, and I can't say I have great answers for all of them. But that's a fair point that if installable tests *are* recommended as a practice then they should be addressed.

Before continuing on, there are really three options being discussed here:

1) Tests totally outside the package.
2) Tests in the package but not installed (only in source dists / VCS checkout)
3) Tests in the package and some or all installed.

Skimming back through the rest of the thread I don't see too much support for 1). The only argument against it is the need for specifying dependencies, etc., which really only impacts developers so long as the tests aren't *installed*, I think. But there's also the question of what kinds of tests we're talking about. I think unit tests should live in the <packagename>.tests for a library. Other kinds of tests I don't have a strong opinion about.

So in any case if we did recommend putting tests in a subpackage we could still do so without making a decree as to whether or not they should be installed. Maybe by default suggest that they not be installed, and only have them installable if the developer has a plan for addressing the questions below. I think each of these questions may have different answers depending on the needs of different projects.
* How to deal with dependencies?
What dependencies exactly? Dependencies just needed for running the tests (as opposed to running the code being tested)? Some of my code has optional dependencies, and the tests for that code are only run if those optional dependencies are satisfied, but that's not what we're talking about, right?
** Should we use extras? Installing test deps has the disadvantage of version conflicts. Unless we make some sort of "virtualenv-for-tests" tool?
I'm rather fond of mktmpenv in virtualenvwrapper, but I admit that may be out of scope here... Not to point to using easy_install as best practice, but something resembling the way setup_requires could actually work here--dependencies are downloaded into a temp dir and installed there as eggs; added to sys.path. No need for something as heavy-weight as a virtualenv. This still has some potential for VersionConflict errors if not handled carefully though. But I might try out something like this and see where I get with it, because I think it would be useful.
** Should we vendor the deps? But isn't that maybe too hard to do (or plain wrong for other reasons)? ** Should we avoid having deps? That can be too limiting in some situations, unittest is very bare compared to pytest or nose.
py.test comes with a command to generate a bundled copy of py.test for vendoring: https://pytest.org/latest/goodpractises.html#deprecated-create-a-pytest-stan... (for reasons I've lost track of, it's apparently deprecated now though?) I've used this with quite a bit of success to support installed tests. It allows me to ship a version of py.test that works with my tests along with the package, and there is no issue with version conflicts or anything. In principle the same trick could be used to bundle other dependencies. In practice it's a little more complicated because downstream packagers don't like this, but it's easy to remove the bundled py.test and use the one provided by the system instead (as on Debian). In that case we do have to work with the downstream packagers to make sure all the tests are working for them. I don't think this will be much of an issue for the average pure-Python package.
* What sort of tests should be included? Integration tests are special case, if they need external services, temporary storage or what not. Would we then need to have clear separation for different types of tests?
I try to separate out tests like this and disable them by default. I think the answer to that question is going to vary a lot on a case by case basis, but it is a good question to pose to anyone considering having installed tests.
I'm not saying tests inside package is bad at all. But to flaunt it around as a "best practice" requires at least some recommendations for the two concerns illustrated above, and probably better tooling.
Fair point, I agree! Best, Erik
On Tue, Oct 6, 2015 at 11:54 PM, Erik Bray
Skimming back through the rest of the thread I don't see too much support for 1). The only argument against it is the need for specifying dependencies, etc., which really only impacts developers so long as the tests aren't *installed*, I think. But there's also the question of what kinds of tests we're talking about. I think unit tests should live in the <packagename>.tests for a library. Other kinds of tests I don't have a strong opinion about.
I think there's some confusion here. The pain point with "inside" tests is exactly the dependencies. And by dependencies I mean test dependencies, we're talking about tests here :-) If you install two packages with "inside" tests, then how do you deal with the version conflict of test dependencies? That's the big elephant in the room everyone is ignoring :) If you have to bend over backwards (to install the test dependencies) in order to run the installed tests then what's the point of installing them at all? It's far safer to ask the user to just check out the sources and run the tests from there. Why completely mess up a user's site-packages just because you want to have this weird `python -mfoobar.tests` workflow? I like the idea, I really do. But it's not for everyone. I strongly feel that only projects that don't have any test dependencies should install the tests, or provide the tests inside the package. Thanks, -- Ionel Cristian Mărieș, http://blog.ionelmc.ro
On Wed, 7 Oct 2015 00:47:31 +0300
Ionel Cristian Mărieș
On Tue, Oct 6, 2015 at 11:54 PM, Erik Bray
wrote: Skimming back through the rest of the thread I don't see too much support for 1). The only argument against it is the need for specifying dependencies, etc., which really only impacts developers so long as the tests aren't *installed*, I think. But there's also the question of what kinds of tests we're talking about. I think unit tests should live in the <packagename>.tests for a library. Other kinds of tests I don't have a strong opinion about.
I think there's some confusion here. The pain point with "inside" tests is exactly the dependencies.
Is it your personal experience or some theoretical argument you're making?
If you install two packages with "inside" tests, then how do you deal with the version conflict of test dependencies?
Well, how do you deal with the version conflict of non-test dependencies? How are tests supposed to be a problem here, while they usually have so few dependencies of their own?
If you have to bend over backwards (to install the test dependencies)
While some packages may have non-trivial test dependencies, usual practice is for test suites to require the exact same dependencies as the rest of the package, + perhaps a test runner. Since we're talking about good practice for the average package, it's not very useful to point out that 0.1% of PyPI packages may have excruciatingly annoying test dependencies.
Why completely mess up a user's site-packages just because you want to have this weird `python -mfoobar.tests` workflow?
Did you have such an experience or are you making this up for the sake of the argument? And just because you are not used to a "workflow" doesn't make it "weird" in any case. Python itself uses such a workflow ("python -m test"). Regards Antoine.
On 10/06/2015 04:04 PM, Antoine Pitrou wrote: [snip]
...How are tests supposed to be a problem here, while they usually have so few dependencies of their own?
If you have to bend over backwards (to install the test dependencies)
While some packages may have non-trivial test dependencies, usual practice is for test suites to require the exact same dependencies as the rest of the package, + perhaps a test runner.
Since we're talking about good practice for the average package, it's not very useful to point out that 0.1% of PyPI packages may have excruciatingly annoying test dependencies.
I think this discussion could probably do with fewer unsupported assertions about what is "usual" -- it's clear that experiences in different parts of the community vary widely. Speaking personally and anecdotally, I maintain 15 or so projects on PyPI, and every single one of them has at least three or four test-only dependencies; not just a test runner, but also testing utilities of one kind or another (e.g. the mock backport for Python 2). So my personal percentage of "packages with more than one test-only dependency" is not 0.1%, it's 100%. I don't have any idea what the real percentage is on PyPI, and wouldn't hazard a guess. I'm fairly sure you don't know either. Carl
On Tue, 6 Oct 2015 16:16:41 -0600
Carl Meyer
On 10/06/2015 04:04 PM, Antoine Pitrou wrote: [snip]
...How are tests supposed to be a problem here, while they usually have so few dependencies of their own?
If you have to bend over backwards (to install the test dependencies)
While some packages may have non-trivial test dependencies, usual practice is for test suites to require the exact same dependencies as the rest of the package, + perhaps a test runner.
Since we're talking about good practice for the average package, it's not very useful to point out that 0.1% of PyPI packages may have excruciatingly annoying test dependencies.
I think this discussion could probably do with fewer unsupported assertions about what is "usual" -- it's clear that experiences in different parts of the community vary widely.
Speaking personally and anecdotally, I maintain 15 or so projects on PyPI, and every single one of them has at least three or four test-only dependencies; not just a test runner, but also testing utilities of one kind or another (e.g. the mock backport for Python 2).
They're still trivial dependencies, though. Usually small or medium-sized pure Python packages with a rather stable API (especially stdlib backports, of course). I don't see how they could cause the kind of mess the OP claimed they would. So I'd still like to know what "bend over backwards" is supposed to mean here. Regards Antoine.
On Wed, Oct 7, 2015 at 1:04 AM, Antoine Pitrou
I think there's some confusion here. The pain point with "inside" tests is exactly the dependencies.
Is it your personal experience or some theoretical argument you're making?
I've published about 27 packages that have tests on PyPI. Out of those only 5 could run purely on stdlib unittest. That leaves me with 22 packages that need test tools like pytest/nose and assorted plugins.
If you install two packages with "inside" tests, then how do you deal with the version conflict of test dependencies?
Well, how do you deal with the version conflict of non-test dependencies? How are tests supposed to be a problem here, while they usually have so few dependencies of their own?
It's double the trouble to find compatible releases. Current tooling doesn't resolve conflicts automatically. Maybe it's something handled better in Conda, for all I know, but I don't use that.
If you have to bend over backwards (to install the test dependencies)
While some packages may have non-trivial test dependencies, usual practice is for test suites to require the exact same dependencies as the rest of the package, + perhaps a test runner.
Can't relate to that - definitely not `usual practice` from my perspective.

Since we're talking about good practice for the average package, it's not very useful to point out that 0.1% of PyPI packages may have excruciatingly annoying test dependencies.
Already feeling guilty ... I hope you're finally happy now :-)
Why completely mess up a user's site-packages just because you want to have this weird `python -mfoobar.tests` workflow?
Did you have such an experience or are you making this up for the sake of the argument?
I got burnt by the pbr issue [1] once (because mock has it as a run-time dependency). I don't normally use `mock` but in the circle of hell I live in someone else depended on it. I don't look forward to that happening again, and I don't want to litter my site-packages with useless test stuff. I already have too much stuff in there.
And just because you are not used to a "workflow" doesn't make it "weird" in any case. Python itself uses such a workflow ("python -m test").
It's weird in the sense that you have to do all these gymnastics to get the test dependencies right before running that. As I previously stated, I like the idea of `python -mfoobar.test` - it's just that dependencies and scope make it weird and impractical for *general* use. [1] https://bugs.launchpad.net/pbr/+bug/1483067 Thanks, -- Ionel Cristian Mărieș, http://blog.ionelmc.ro
On Wed, 7 Oct 2015 01:44:34 +0300
Ionel Cristian Mărieș
That leaves me with 22 packages that need test tools like pytest/nose and assorted plugins.
[...]
It's double the trouble to find compatible releases.
Hmm, are you saying py.test / nose and assorted plugins break APIs often? I would be a bit surprised, but I don't use them nowadays. Modern unittest is quite capable.
And just because you are not used to a "workflow" doesn't make it "weird" in any case. Python itself uses such a workflow ("python -m test").
It's weird in the sense that you have to do all these gymnastics to get the test dependencies right
Well, "getting the dependencies right" must be done, whether you run the tests from their installed location or from the source tree. In both cases, you can probably get the package management system to automate package downloads and installs. Regards Antoine.
Am 06.10.2015 um 17:33 schrieb Erik Bray:
On Tue, Oct 6, 2015 at 8:34 AM, Ionel Cristian Mărieș
wrote: On Tue, Oct 6, 2015 at 3:13 PM, Antoine Pitrou
wrote: On Tue, 6 Oct 2015 07:07:31 -0400 Donald Stufft
wrote: I've never, in my entire life [...]
Can I suggest your entire life is an anecdotal data point here?
Make that two anecdotal data points :-)
Any number of things can be described as trivial depending on the skillset and patience of the user. When users report a bug, they are not expecting to be asked to download and "untar" stuff. Not every user is a programmer.
But seriously now, your arguments are also anecdotal. Let's not pretend we're objective here. That sort of attitude is disingenuous and will quickly devolve this discussion into mere ad hominems.
Okay, though, so maybe if there is nothing to offer here but anecdata then maybe we should stop acting like there's "one right way here". I have projects that install their test suite and test dependencies because it is frequently useful to ask users to run a self-test (and users themselves want to be able to do it for a variety of reasons).
There are other projects where it doesn't make sense, and those don't have to install the tests (I still think in those cases that the tests should live in the package instead of outside it, but simply not installed).
In any case, let's not get trapped into endless circular discussions about what is correct, period, and instead consider individual use cases--not dismissing individual projects' or peoples' experiences and needs--and discuss what the most appropriate action is for those use cases. Python projects are not monolithic in their audiences (and that includes developer audiences and user audiences).
Yes, there is no generic "one right way here". Yes, let's consider individual use cases. My use case is the docs for newcomers: - https://github.com/pypa/sampleproject - https://packaging.python.org/en/latest/distributing/ That's why I started the thread. Newcomers don't have the experience you have. Newcomers want to get their coding done. They need a simple piece of advice: "If unsure, then do X". And for this question, I want a "one right way here". Up to now it looks like there is no consensus. Regards, Thomas Güttler -- http://www.thomas-guettler.de/
On Tue, Oct 6, 2015 at 10:38 PM, Thomas Güttler < guettliml@thomas-guettler.de> wrote:
Yes, there is no generic "one right way here".
Yes, let's consider individual use cases.
My use case is the docs for newcomers:
- https://github.com/pypa/sampleproject - https://packaging.python.org/en/latest/distributing/
That's why I started the thread.
Unfortunately, that isn't a use case -- every newcomer has a different use case. I was happy to see this thread, because I thought maybe I'd learn what I should teach my students - new to Python. But alas - there clearly really is no consensus. What I've told newbies in the past is something like:

"""
If you want your user to be able to install your package, and then run something like:

    import my_package
    my_package.test()

then put your tests inside the package. If you are fine with only being able to run the tests from the source tree -- then put your tests outside the package.
"""

But really, newbies have no idea how to make this decision. Maybe we could come up with a decision tree for this -- some guidance for knowing what to do, when? -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker@noaa.gov
On Wed, Oct 7, 2015 at 2:11 AM, Chris Barker
On Tue, Oct 6, 2015 at 10:38 PM, Thomas Güttler
wrote: Yes, there is not generic "one right way here".
Yes, let's consider individual use cases.
My use case are the docs for new comers:
- https://github.com/pypa/sampleproject - https://packaging.python.org/en/latest/distributing/
That's why started the thread.
Unfortunately, that isn't a use case -- every newcomer has a different use case.
Indeed--I've helped newcomers whose very first attempt at packaging Python code includes Cython code for simulations. I think even for newcomers what we should be providing is not a "this is the way to do it" because then they get confused when that way doesn't work for them. Better, in the long term (and I'm happy to contribute to such an effort if it will help) is to provide a sort of Choose Your Own Adventure story. It can't all go on one page because that would be a mess, but a sort of "If you need to do this, read this. If you need to do this, read this. Now if you need to include some data files that are installed in your package read on, because there's really only one right way to do that. But now you have some options if you want to include tests: ..."
I was happy to see this thread, because I thought maybe I'd learn what I should teach my students - new to Python.
But alas - there clearly really is no consensus.
What I've told newbies in the past is something like:
""" if you want your user to be able to install your package, and then run something like:
import my_package my_package.test()
then put your tests inside the package.
If you are fine with only being able to run the tests from the source tree -- then put your tests outside the package. """
but really, newbies have no idea how to make this decision.
Maybe we could come up with a decision tree for this -- some guidance for knowing what to do, when?
Exactly. I think it could even be fun :) How could we get started to add something like this to the packaging docs? Erik
I was happy to see this thread, because I thought maybe I'd learn what I should teach my students - new to Python.
Maybe we could come up with a decision tree for this -- some guidance for knowing what to do, when?
Exactly. I think it could even be fun :)
How could we get started to add something like this to the packaging docs?
In case you don't know, the project for packaging.python.org is here: https://github.com/pypa/python-packaging-user-guide In general, we're trying to maintain 2 simple guides for installing and distributing... and then there's a somewhat open "Additional Topics" section for tutorials or more "advanced" topics.
On Oct 06, 2015, at 09:51 AM, Antoine Pitrou wrote:
The PyP"A" should definitely fix its sample project to reflect good practices.
Here's my own sample project. There are actually two git branches, one with an extension module and one with pure Python. https://gitlab.com/warsaw/stupid Cheers, -Barry
On Tue, Oct 06, 2015 at 09:51:01AM +0200, Antoine Pitrou wrote:
They should be inside the module. That way, you can check an installed module is ok by running e.g. "python -m mypackage.tests". Any other choice makes testing installed modules more cumbersome.
As Donald mentioned, this doesn't work in the general case since many packages ship quite substantial test data around that often doesn't end up installed, and in other cases since the package requires significant fixture setup or external resources (e.g. running SQLAlchemy tests without a working database server would be meaningless). The option of always shipping test data as a standard part of a package in a vain attempt to always ensure it can be tested (which is not always likely given the SQLAlchemy example above) strikes me as incredibly wasteful, not from some oh-precious-bytes standpoint, but from the perspective of distributing a Python application of any size where the effect of always shipping half-configured test suites has increased the resulting distribution size potentially by 3 or 4x. https://github.com/bennoleslie/pexif is the first hit on Google for a module I thought would need some test data. It's actually quite minimally tested, yet already the tests + data are 3.6x the size of the module itself. I appreciate arguments for inlining tests alongside a package in order to allow reuse of the suite's functionality by consuming applications' test suites, but as above, in the general case this simply isn't something that will always work and can be relied on by default. Is there perhaps a third option that was absent from the original post? e.g. organizing tests in a separate, optional, potentially pip-installable package.
outside the module like this:
There is no actual reason to do that, except to save a couple of kilobytes if you are distributing your package on floppy disks for consumption on Z80-based machines with 64KB RAM.
Even Python *itself* puts its test suite inside the standard library, not outside it (though some Linux distros may strip it away). Try "python -m test.regrtest" (again, this may fail if your distro decided to ship the test suite in a separate package).
The PyP"A" should definitely fix its sample project to reflect good practices.
Regards
Antoine.
_______________________________________________ Distutils-SIG maillist - Distutils-SIG@python.org https://mail.python.org/mailman/listinfo/distutils-sig
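Antoine's suggestion of running an installed package's tests via "python -m mypackage.tests" can be sketched with a tests sub-package whose __main__.py runs the suite. A minimal, hypothetical runner (the package name and test are illustrative, not from any project in the thread):

```python
import unittest

class TestInstalledPackage(unittest.TestCase):
    # A placeholder smoke test; a real suite would import mypackage
    # and exercise its public API against the installed copy.
    def test_smoke(self):
        self.assertEqual(len("mypackage"), 9)

def run():
    # The body of a hypothetical mypackage/tests/__main__.py, so that
    # "python -m mypackage.tests" works on any installation.
    loader = unittest.TestLoader()
    suite = loader.loadTestsFromTestCase(TestInstalledPackage)
    return unittest.TextTestRunner(verbosity=0).run(suite)
```

In a real layout, run() would live in mypackage/tests/__main__.py and use unittest.TestLoader.discover() to pick up the sibling test modules.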
At the pylons project we've had a history of keeping our tests inside the
packages. However, keeping them outside has proven to be nicer in some
projects as well. 1) It reduces the size of the binary wheels that do not
need to package the tests. 2) It still allows you to run the tests on an
arbitrary installation if you want to by pulling down the repo and running
them against the version installed in the virtualenv. Distributing the
tests to every single installation is definitely not a requirement in order
to be able to run them for people who want to run them, and decoupling them
can really help with that. Projects that are shipping tests inside the
package may have them removed upstream and then it is very difficult to run
the suite against some binary distribution.
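Michael's first point (smaller binary wheels) is usually achieved by excluding the tests directory at build time. A runnable sketch using setuptools' find_packages on a throwaway tree (all names are illustrative):

```python
import os
import tempfile
from setuptools import find_packages

# Fabricate a project root containing the package and a top-level
# tests/ directory, mirroring the out-of-package layout.
root = tempfile.mkdtemp()
for name in ("mypackage", "tests"):
    os.makedirs(os.path.join(root, name))
    open(os.path.join(root, name, "__init__.py"), "w").close()

# The exclude list keeps tests out of the wheel while the repo,
# and optionally the sdist, still carries them.
packages = find_packages(root, exclude=["tests", "tests.*"])
print(packages)  # ['mypackage']
```

With tests inside the package (mypackage/tests), the same exclude list can name "mypackage.tests" instead.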
Also having test code in the package can be very painful if you use tools like venusian which scan and try to import all Python files. Sent from my iPhone
On 10 Oct 2015 at 10:47, Wichert Akkerman wrote:
Also having test code in the package can be very painful if you use tools like venusian which scan and try to import all Python files.
Hi Wichert Akkerman, can you please explain this pain? Regards, Thomas Güttler -- http://www.thomas-guettler.de/
On 10 Oct 2015, at 13:06, Thomas Güttler
wrote: On 10 Oct 2015 at 10:47, Wichert Akkerman wrote:
Also having test code in the package can be very painful if you use tools like venusian which scan and try to import all Python files.
Hi Wichert Akkerman,
can you please explain this pain?
Importing tests often leads to problems for two reasons: 1) tests try to import things that are not normally installed (mock, pytest, redis-mock, etc.) which breaks application startup, and 2) some tests have import side-effects which can be fine for testing purposes, but should never trigger during normal usage. Wichert.
On 10 October 2015 at 21:47, Wichert Akkerman
Also having test code in the package can be very painful if you use tools like venusian which scan and try to import all Python files.
Even if tests are not in-package, such tools need to cope with extras
in general: any optionally-installed dependencies will have this
issue. Venusian offers a callback on import errors -
http://venusian.readthedocs.org/en/latest/#onerror-scan-callback.
-Rob
--
Robert Collins
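venusian's onerror hook mirrors the stdlib's pkgutil.walk_packages(onerror=...); the same pattern can be shown self-contained with the stdlib by fabricating a package whose tests import an uninstalled dependency (all names here are made up):

```python
import os
import sys
import tempfile
import pkgutil

# Fabricate demo_pkg/ with a tests sub-package that imports a missing
# dependency: the failure mode Wichert describes.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "demo_pkg")
os.makedirs(os.path.join(pkg, "tests"))
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "tests", "__init__.py"), "w") as f:
    f.write("import definitely_not_installed_dep\n")

sys.path.insert(0, root)
failed = []
# onerror is invoked instead of the scan blowing up on the bad import.
for info in pkgutil.walk_packages([pkg], prefix="demo_pkg.",
                                  onerror=failed.append):
    pass
print(failed)  # ['demo_pkg.tests']
```

The scan survives, records the broken sub-package, and the application code itself remains importable.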
On 7 Oct 2015 at 01:14, David Wilson wrote:
On Tue, Oct 06, 2015 at 09:51:01AM +0200, Antoine Pitrou wrote:
They should be inside the module. That way, you can check an installed module is ok by running e.g. "python -m mypackage.tests". Any other choice makes testing installed modules more cumbersome.
As Donald mentioned, this doesn't work in the general case since many packages ship quite substantial test data around that often doesn't end up installed, and in other cases since the package requires significant fixture setup or external resources (e.g. running SQLAlchemy tests without a working database server would be meaningless).
Should work with a temporary sqlite db.
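Thomas's temporary-database suggestion, sketched with the stdlib's sqlite3 (a real SQLAlchemy suite would point its engine URL at this file, or simply use sqlite:///:memory:):

```python
import os
import sqlite3
import tempfile

# Create a throwaway on-disk database for the duration of the run.
fd, path = tempfile.mkstemp(suffix=".db")
os.close(fd)
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")
rows = conn.execute("SELECT x FROM t").fetchall()
conn.close()
os.unlink(path)  # tear the fixture down again
print(rows)  # [(1,)]
```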
The option of always shipping test data as a standard part of a package, in a vain attempt to ensure it can always be tested (which is not always possible, given the SQLAlchemy example above), strikes me as incredibly wasteful: not from some oh-precious-bytes standpoint, but from the perspective of distributing a Python application of any size, where always shipping half-configured test suites can increase the resulting distribution size by 3 or 4x.
https://github.com/bennoleslie/pexif is the first hit on Google for a module I thought would need some test data. It's actually quite minimally tested, yet already the tests + data are 3.6x the size of the module itself.
I appreciate arguments for inlining tests alongside a package in order to allow reuse of the suite's functionality by consuming applications' test suites, but as above, in the general case this simply isn't something that will always work and can be relied on by default.
Is there perhaps a third option that was absent from the original post? e.g. organizing tests in a separate, optional, potentially pip-installable package.
Yes, this third way is plausible. I guess there is even a fourth way. The question remains: if a newcomer asks "How do I package my Python code and its tests?", there should be one default answer which works for 80% of all cases. I think the confusion only gets worse when people create new publicly accessible repos which explain "Hey, that's my way to package stupid simple Python code". Regards, Thomas Güttler -- http://www.thomas-guettler.de/
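David's "third option" of optionally installable tests is often approximated with a setuptools extra: the tests stay in the repository and sdist, while their dependencies install only on demand via "pip install mypackage[tests]". The kwargs below are a hypothetical sketch, not a recipe from the thread:

```python
# In a real project these would be passed as setup(**SETUP_KWARGS)
# from setup.py; they are collected in a dict here so the sketch is
# runnable without invoking a build.
SETUP_KWARGS = {
    "name": "mypackage",
    "version": "0.1",
    "packages": ["mypackage"],
    # Test-only requirements live behind an extra rather than in
    # install_requires, so plain installs stay lean.
    "extras_require": {"tests": ["pytest", "mock"]},
}
print(sorted(SETUP_KWARGS["extras_require"]["tests"]))  # ['mock', 'pytest']
```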
On Fri, Oct 9, 2015 at 10:44 AM, Thomas Güttler < guettliml@thomas-guettler.de> wrote:
The question remains: if a newcomer asks you "How do I package my Python code and its tests?", there should be one default answer which works for 80% of all cases.
Should be, maybe -- but clearly there is no consensus as to what the "one" answer should be. But I _think_ there are two answers:

1) inside the package, so it can be installed and the tests run on the installed version.

2) outside the package, so that potentially large or complex test requirements are not required for installation.

So the intro docs can lay out those two options, with a bit of text that helps a newbie make a decision. From the above, I think that EITHER option would work fine for 80% of cases, which makes me think: why not pick one as the default while clearly documenting the other? And despite the fact that I have always used option (2) in my work, I think, if there is going to be one in the simple example, it should be (1) -- it's a fine way to get started, and users can move their tests outside of the package later if they start getting big. As a package grows to that extent, it will likely need other structural changes anyway. -Chris
I think the confusion gets worse by creating new public accessible repos which explain "Hey that's my way to package stupid simple python code".
Regards, Thomas Güttler
-- http://www.thomas-guettler.de/
-- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker@noaa.gov
On 6 October 2015 at 20:07, Thomas Güttler
Hi,
Where should I put tests when packaging python modules?
I want a "cowpath", an "obvious way"
Dear experts, please decide:
inside the module like this answer:
http://stackoverflow.com/questions/5341006/where-should-i-put-tests-when-pac...
XOR
outside the module like this:
https://github.com/pypa/sampleproject/tree/master/tests
I think there is no need to hurry. Let's wait one week, and then check which one is preferred.
My preference is to have the tests clearly namespaced to the project.
That can be done one of three ways:
./projectpackage/[zero-or-more-dirs-deep]/tests
./projectname_tests/
./tests with a runner configuration to add appropriate metadata
The namespacing lets me gather up test output from multiple projects
and not have it confused (nor have to artificially separate out what
tests ran in what test process).
This is mainly useful when taking a big-data mindset to your tests, so it is not so relevant to "here's my 1k LOC project, enjoy world" scenarios.
-Rob
--
Robert Collins
participants (21)
- Antoine Pitrou
- Barry Warsaw
- Ben Finney
- Carl Meyer
- Chris Barker
- David Cournapeau
- David Wilson
- Donald Stufft
- Erik Bray
- Fred Drake
- Glyph Lefkowitz
- Ionel Cristian Mărieș
- Jeremy Stanley
- Marcus Smith
- Marius Gedminas
- Michael Merickel
- Paul Moore
- Robert Collins
- Thomas Güttler
- Wes Turner
- Wichert Akkerman