[pytest-dev] another "i have had it" moment and a stern request to review our interaction with backward compatibility
opensource at ronnypfannschmidt.de
Wed Nov 7 04:11:42 EST 2018
A few days ago I was trying to replicate an issue in the Python shell, and
quickly realized that, outside of actually doing a complete pytest.main
invocation, there is simply no practical and sane way to get a config
object that is not in an utterly broken, unusable state to begin with.
Again and again I keep running into things that have deep and nastily
hidden dependencies that are seemingly impossible to resolve.
And we get clear results from that - again and again certain features
simply can't be implemented or fearlessly delivered, as the shoddy house
of cards falls apart with spooky action at a distance.
Be it marks that smear over class hierarchies, or fixture definitions
that blow up for undiscoverable reasons when trying to introduce the
multi-scope fixture system.
Again and again, many of us run into massive impediments when trying to
develop features, simply because of strange actions at a distance.
This is an accumulating cost in many ways - in particular for a project
driven primarily by volunteers - every hour we sacrifice to this altar
of technical debt is an hour lost for improving the project and/or
working on actual features.
From my perspective this issue grows directly out of driving a large
part of pytest's development by examples and tests, while precluding
stepping back and doing actual design.
The history of marks is a perfect example of such a horror story:
each iteration added a new minimal element while adding a huge factor
to the structural error as it grew.
I really want to hammer home here that "minimal" is very different
from "minimal viable" - leaving design uncontested for too long is a
ramp-up for unsustainable technical debt.
It's also critical to note that for **me** there is an inherent
dishonesty in trying to stay backward compatible while not putting
forward a design and development process that deeply supports it -
right now we can observe a state of fragility in pytest where it
feels like every feature release triggers some regression, while at
the same time we keep on shipping features that have been structurally
and fundamentally broken for years (yield tests, config init, ...).
This setup, which gives me the impression it is "designed" to make the
project fail its users (i.e. it's broken to begin with and now it will
break some more), is a massive emotional drain. It painfully twists
the sense of responsibility, simply due to the perceived inherent
base level of failure - and **I** want to do and feel better.
However, with the current flow of releases and with our backward
compatibility policies in place, I am under the very general impression
that I can't really do anything about it, and that even if I tried it would
drag on for years to generate even basic positive results. This
impression is a killer to my motivation, as it **feels** like it would
take years to even get an initial iteration out to the users, and the
process of fixing the issues in general would drag on over not only
years, but decades.
A timeline like that would completely disconnect me from any perception
of reward or achievement, in turn making it even less likely to
succeed or even start.
This is an open source project that is volunteer driven - fixing deeper
issues should connect to a feeling of achievement - but with our current
setup, what seems to be in it for **me** is more like dread, fear and
depression - a general defeat.
That's not something I want to allow anymore.
**I** want to feel good about my work on pytest.
**I** want to feel the achievements I have on pytest.
and because of that
**I** have to make sure the project can support that.
So **I** want to invite all of you,
Lets make pytest a project again where
* we claim support for backward compatibility, and our actions and
results show it
* we can enjoy the act of adding new features and enhancing existing ones
* we can have better and quicker feedback, not just from the fabulous
people that work on it, but also from the users.
**I** strongly believe that this is doable, but it will require cutting
off some bad ends.
We need to improve layering - it doesn't have to be perfect, but we do
have to be able to keep things apart sanely, and we need to have
objects in valid states by default (invalid states should only be
possible deliberately, not accidentally).
We need to talk/write about design more - the difference between
minimal and viable, after all, only shows itself in the heat and friction
of different perspectives and opinions clashing.
We need conversation and interaction with our advanced users, so they
don't end up stuck on old and dead pytest versions (like PyPy, for example).
We need a pervasive chain of reasoning to carry us through the period of
papercuts where we do shift the layering around, so the ecosystem can keep
up (saving the structural integrity of pytest at the expense of
destroying the ecosystem would be a disaster).
Of course there is an obligatory white-on-red "make pytest great again"
cap in planning ;P