[pytest-dev] another "i have had it moment" and a stern request to review our interaction with backward compatibility
oliver at bestwalter.de
Wed Nov 7 16:34:09 EST 2018
I'd like to add my 2 cents from the perspective of an enthusiastic pytest
user, who dabbles in a lot of projects using pytest and who occasionally
contributes.
I have used pytest since 2010 and can still remember a time when I was able to
trigger the dreaded <INTERNALERROR> quite regularly, and it was often really
hard for me to figure out what went wrong, leading to arcane workarounds in
my test suites.
Since I am a bit involved in the project and pay more attention, I remember
* one incident where I triggered an INTERNALERROR during normal usage in a
fresh release, and the problem was easy to spot and fix
* one breaking change that was also easy to fix: the change in the logging
behaviour, which was also perfectly o.k., because the bottom line is that the
logging behaviour overall is now far better because of it
* one hard-to-reproduce race condition due to the introduction of tmp_path
(thanks for that btw), which was also fixed faster than I was even able to
find some time to have a closer look
All in all pretty painless and easy to work around, and usually fixed really
quickly.
In my experience the subset of functionality/plugins that by far the most
test suites use is rock solid, has a good API, and hardly ever breaks.
Also: the quality of the "user experience" and the documentation has
improved vastly in the last few years.
Just today I had the chance to pair program some tests to introduce a
colleague to pytest. Seeing the expression of delight in their face, when
they realized how easy it is to get started and when they started to
comprehend the power of fixtures and parametrization is priceless.
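For readers new to pytest, the kind of test my colleague was delighted by looks roughly like this (an illustrative sketch; the fixture and test names are invented):

```python
import pytest

@pytest.fixture
def numbers():
    # A fixture: pytest injects this return value into any test
    # that names "numbers" as a parameter.
    return [1, 2, 3]

def test_sum(numbers):
    assert sum(numbers) == 6

# Parametrization: one test function runs once per argument tuple.
@pytest.mark.parametrize("value,expected", [(2, 4), (3, 9)])
def test_square(value, expected):
    assert value ** 2 == expected
```

Running `pytest` on this file collects and runs three tests: `test_sum` plus two parametrized variants of `test_square`.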
Long story short: you folks are amazing, and pytest is something to be
extremely proud of despite all the pain that is always involved when trying
to pay off technical debt without breaking the world.
The release automation that is in place now makes more frequent releases
easier, and from my own experience I agree with Bruno's suggestion to make
even more frequent releases that contain smaller changesets, and also to not
shy away from more frequent major releases, if that makes the transition
smoother.
Thank you for caring, and thanks for all your work.
On Wed, 7 Nov 2018 at 20:56 Bruno Oliveira <nicoddemus at gmail.com> wrote:
> Hi Ronny,
> On Wed, Nov 7, 2018 at 7:12 AM Ronny Pfannschmidt <
> opensource at ronnypfannschmidt.de> wrote:
>> This is an accumulative cost in many ways - in particular for a project
>> driven primarily by volunteers - every hour we sacrifice to this altar
>> of technical debt is an hour lost for improving the project and/or
>> working on actual features.
> I feel your frustration. I agree that the pytest codebase needs some
> refactorings in order for it to be maintainable and allow us to move
> forward.
>> From my perspective this issue directly grows out of driving a large
>> part of pytest's development by examples and tests, but precluding
>> stepping back and doing actual design.
>> The history of marks makes a perfect example of such a horror story.
>> Each iteration added a new minimal element while adding a huge factor
>> to the structural error as it grew.
> What I think often happens is that we can't foresee that a minimal
> increment might lead to a large technical debt in the future. Once we put
> something in the open, we try very hard to not change that behavior (even
> if considered incorrect now), which is the main point of your email.
>> I really want to hammer down here that "minimal" is very different
>> from "minimal viable" - leaving design uncontested for too long is a
>> ramp-up for unsustainable technical debt.
>> It's also critical to note that for **me** there is an inherent
>> dishonesty in trying to stay backward compatible without putting
>> forward a design and development process that deeply supports it -
>> right now we can observe a state of fragility from pytest where it
>> feels like every feature release triggers some regression - while at
>> the same time we keep on shipping features that have been structurally
>> and fundamentally broken for years (yield tests, config init, ...).
> I agree that with every feature release we end up breaking
> something unintentionally. Of course that will sometimes happen, but I feel
> it happens more often in the murky areas of the code which have grown
> organically over the years.
>> This setup, which gives me the impression it is "designed" to make the
>> project fail its users (aka it's broken to begin with and now it will
>> break some more), is a massive emotional drain. It painfully twists
>> around the sense of responsibility simply due to the perceived inherent
>> base-level of failure - and **I** want to do and feel better.
>> However with the current flow of releases and with our backward
>> compatibility policies in place I am under the very general impression
>> that I can't really do anything about it, and even if I try it would
>> drag on for years to generate even basic positive results - this
>> impression is a killer to my motivation - as it **feels** like it would
>> take years to even get an initial iteration out to the users - and the
>> process of fixing the issues in general would drag on over not only
>> years - but decades.
>> A timeline like that would completely disconnect me from perceptions
>> of reward or achievement - in turn making it even less likely to
>> succeed or even start.
> About our backward compatibility policies, we don't have any minimal time
> restriction for major releases; we only state that we will issue
> deprecation warnings for two feature releases before actually
> changing/removing a feature.
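The policy described above - warn for a while, then remove - is typically implemented with a dedicated warning category. Here is a hedged sketch of that pattern (the warning class and helper names are hypothetical, not pytest's actual internals):

```python
import warnings

class ExampleDeprecationWarning(DeprecationWarning):
    """Hypothetical category, analogous in spirit to pytest's deprecation warnings."""

def new_helper():
    return "result"

def old_helper():
    # Emit the warning at the caller's location (stacklevel=2), so users
    # see where the deprecated API is used, not this wrapper itself.
    warnings.warn(
        "old_helper() is deprecated and scheduled for removal in a future "
        "major release; use new_helper() instead",
        ExampleDeprecationWarning,
        stacklevel=2,
    )
    return new_helper()
```

After two feature releases of warning, the old entry point can be deleted with users already pointed at the replacement.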
> Having said that, I see that it is possible for us to have major releases
> more often (even a few per year). As we have learned already, frequent
> releases which cause few incompatibilities are preferable to infrequent
> releases which cause a lot of incompatibilities, as they affect fewer
> users and make it easier to pinpoint problems.
> My gut feeling is that those backward-incompatible releases won't be that
> bad in the end: we only change little-used features, or porting the code to
> the new way of doing things is easy to apply to existing code. Of course
> some friction always happens.
> Dragging deprecated features over many releases, especially if they get in
> the way of new implementations, is a certain way to needlessly increase the
> burden on us maintainers, as you point out.
>> This is an open-source project that is volunteer-driven - fixing deeper
>> issues should connect to a feeling of achievement - but with our current
>> setup what seems to be in it for **me** is more like dread, fear and
>> depression - a general defeat.
>> That's not something I want to allow anymore.
>> **I** want to feel good about my work on pytest.
>> **I** want to feel the achievements I make on pytest.
>> and because of that
>> **I** have to make sure the project can support that.
>> So **I** want to invite all of you:
>> Let's make pytest a project again where
>> * we claim support for backward compatibility, and our actions and
>> results show it
>> * we can enjoy the act of adding new features and enhancing the
>> project
>> * we can have better and quicker feedback, not just from the fabulous
>> people that work on it, but also from the users.
>> **I** strongly believe that this is doable, but it will require cutting
>> some bad ends.
>> We need to enhance layering; it doesn't have to be perfect, but we do
>> have to be able to keep things apart sanely, and we need to have
>> objects in valid states per default (invalid states should only be
>> possible deliberately, not accidentally).
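The "valid states per default" idea above can be sketched in a few lines (illustrative only; `Config` here is an invented name, not pytest's actual Config class):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Config:
    rootdir: str
    args: tuple

    def __post_init__(self):
        # Reject invalid states at construction time, so a
        # half-initialized object can never escape by accident.
        if not self.rootdir:
            raise ValueError("rootdir must be non-empty")
```

Because validation happens in the constructor, any code holding a `Config` can rely on its invariants; reaching an invalid state requires a deliberate workaround, never an accident.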
>> We need to talk/write about design more - the difference between
>> minimal and viable, after all, only shows itself in the heat and friction
>> of different perspectives and opinions clashing.
>> We need conversation and interaction with our advanced users, so they
>> don't end up on old and dead pytest versions (like pypy for example).
>> We need a pervasive chain of reasoning to bring us through the period of
>> papercuts where we do shift around layering, so the ecosystem can keep
>> up (saving the structural integrity of pytest at the expense of
>> destroying the ecosystem would be a disaster).
> I agree with the points above Ronny, let's make it happen. :)
> pytest-dev mailing list
> pytest-dev at python.org