On 14 Dec, 2012, at 8:27, "Gregory P. Smith" <greg@krypto.org> wrote:


On Mon, Dec 10, 2012 at 11:16 PM, Antoine Pitrou <solipsis@pitrou.net> wrote:
On Tue, 11 Dec 2012 03:05:19 +0100 (CET)
gregory.p.smith <python-checkins@python.org> wrote:
>   Using 'long double' to force this structure to be worst-case aligned is no
> longer required as of Python 2.5+, when gc_refs changed from an int (4
> bytes) to a Py_ssize_t (8 bytes) and the minimum size became 16 bytes.
>
> The use of a 'long double' triggered a warning from Clang trunk's
> Undefined Behavior Sanitizer, as on many platforms a long double requires
> 16-byte alignment but the Python memory allocator only guarantees 8-byte
> alignment.
>
> So our code would allocate and use these structures with technically improper
> alignment, though it didn't matter since the 'dummy' field is never used.
> This silences that warning.
>
> Spelunking into code history, the double was added in 2001 to force better
> alignment on some platforms and changed to a long double in 2002 to appease
> Tru64.  That issue should no longer be present since the upgrade from int to
> Py_ssize_t, which increased the minimum structure size to 16 (unless anyone
> knows of a platform where ssize_t is 4 bytes?).
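
For readers without the source at hand, the union hack in question looks
roughly like this. It is a simplified sketch reconstructed from the
description above, with Py_ssize_t stood in by ptrdiff_t so it compiles
on its own; the real definition lives in Include/objimpl.h:

/* Simplified sketch of the GC header union discussed above (not the
 * verbatim CPython definition). */
#include <stddef.h>

typedef ptrdiff_t py_ssize_t_sketch;   /* stand-in for Py_ssize_t */

typedef union gc_head_sketch {
    struct {
        union gc_head_sketch *gc_next;
        union gc_head_sketch *gc_prev;
        py_ssize_t_sketch gc_refs;     /* was a plain 'int' before Python 2.5 */
    } gc;
    long double dummy;                 /* never read; only forces worst-case
                                          alignment -- this is what trips
                                          Clang's UBSan when the allocator
                                          hands back 8-byte-aligned memory */
} gc_head_sketch;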

What?? Every 32-bit platform has a 4-byte ssize_t (and size_t).

No, they don't.

size_t and ssize_t exist in large part because they are often larger than an int or long on 32-bit platforms.  They are 64-bit on Linux regardless of platform (I think there is a way to force a compile in an ancient mode that makes them and the APIs being used 32-bit size_t variants, but nobody does that).

Are you sure about this? What you describe seems to be loff_t (the typedef for file offsets), not size_t (the typedef for sizes of memory blocks). The size of memory blocks is limited to a 32-bit number on 32-bit systems (for the obvious reason).

size_t is 32-bit on at least some platforms:

$ cat t.c
#include <stdlib.h>
#include <stdio.h>

int main(void)
{
printf("sizeof(size_t): %d\n", (int)sizeof(size_t));
return 0;
}

$ cc -o t t.c -arch i386
$ ./t
sizeof(size_t): 4


This session is on an OS X system, but you'll get the same output on a 32-bit Linux system with default compiler settings (I've tested this on a SLES 10 system).
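
For what it's worth, the compile-mode switch mentioned above sounds like
large-file support: on 32-bit Linux/glibc, building with
-D_FILE_OFFSET_BITS=64 widens off_t (the file-offset type) to 64 bits but
leaves size_t at 32 bits. A small variation on the program above shows
both side by side (an untested sketch; output omitted):

/* Sketch: size_t describes in-memory sizes, off_t describes file
 * offsets; large-file support only affects the latter. */
#include <sys/types.h>
#include <stdio.h>

int main(void)
{
    printf("sizeof(size_t): %d\n", (int)sizeof(size_t));
    printf("sizeof(off_t):  %d\n", (int)sizeof(off_t));
    return 0;
}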


> We can probably get rid of the double and this union hack altogether today.
> That is a slightly more invasive change that can be left for later.

How do you suggest we get rid of it? Some platforms still have strict
alignment rules and we must enforce that PyObjects (*) are always
aligned to the largest possible alignment, since a PyObject-derived
struct can hold arbitrary C types.

(*) GC-enabled PyObjects, anyway. Others will be naturally aligned
thanks to the memory allocator.


What's more, I think you shouldn't be doing this kind of change in a
bugfix release. It might break compiled C extensions since you are
changing some characteristics of object layout (although you would
probably only break those extensions which access the GC header, which
is probably not many of them). Resource consumption improvements
generally go only into the next feature release.

This isn't a resource consumption improvement.  It is a compilation correctness change with zero impact on the generated code or ABI compatibility before and after.  The structure, as defined, was flagged as problematic by Clang's undefined behavior sanitizer because it contains a 'long double', which requires 16-byte alignment, but Python's own memory allocator was using an 8-byte boundary.

So changing the definition of the dummy side of the union makes zero difference to already compiled code, since (a) it doesn't change the structure's size and (b) all existing implementations already align these on an 8-byte boundary.
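
If anyone wants to double-check that on their platform, something like the
following (a sanity-check sketch, not part of the patch) compiled against
the headers from before and after the change should print identical
numbers in both builds if the layout really is unchanged:

#include <Python.h>   /* Python.h first, per the C API docs */
#include <stddef.h>
#include <stdio.h>

/* classic portable trick to observe the alignment the compiler applies */
struct align_probe { char c; PyGC_Head head; };

int main(void)
{
    printf("sizeof(PyGC_Head):  %d\n", (int)sizeof(PyGC_Head));
    printf("alignof(PyGC_Head): %d\n", (int)offsetof(struct align_probe, head));
    return 0;
}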

-gps
