On Mon, Feb 22, 2021 at 8:19 AM email@example.com wrote:
> There are zero technical reasons for what you are planning here.
> Multiple core developers have explained, in several different ways, how it's a maintenance burden.
> You are inflating a few lines of autoconf into "platform support" so that you have a reason to justify adding multiple lines of extra autoconf code, making life harder for downstream distributions.
"Making life harder" sounds to me like oh, maybe supporting one additional platform is not free and comes with a cost. This cost is something called the "maintenance burden".
My question is whether Python wants to pay this cost, or whether we want to transfer the maintenance burden to the people who actually care about these legacy platforms and architectures.
Your position is: Python must pay this price. My position is: Python should not.
Honestly, if it's just a few lines, it will be trivial for you to maintain a downstream patch, and I'm not sure why we even need this conversation. If it's more than a few lines, then again we come back to the real maintenance burden.
The thing is, you made assumptions about how downstream distributions use Python without doing any research first ("16-bit m68k-linux").
I'm talking about the 16-bit memory alignment on m68k, which causes SIGBUS if it's not respected. For example, unicodeobject.c requires special code just for this architecture:
/*
 * Issue #17237: m68k is a bit different from most architectures in
 * that objects do not use "natural alignment" - for example, int and
 * long are only aligned at 2-byte boundaries. Therefore the assert()
 * won't work; also, tests have shown that skipping the "optimised
 * version" will even speed up m68k.
 */
#if !defined(__m68k__)
(...)
Such an issue is hard to anticipate when writing code; you usually only spot it while actually running the code on such an architecture.