"Phillip J. Eby" <pje@telecommunity.com> writes:
The thinking was that the pools are aligned on a known size boundary (e.g. 4K) so to get to the head you just mask off the 12 (or whatever) least significant bits.
Ah. But since even the most trivial of Python operations require access to the type, wouldn't this take longer? I mean, for every ob->ob_type->tp_whatever you'll now have something like *(ob & mask)->tp_whatever.
Well, I dunno. I doubt the masking would add significant overhead -- it'd only be one instruction, after all -- but the fact that you'd have to haul the start of the pool into the cache to get the pointer to the type object might hurt. You'd have to try it and measure, I guess.
So there are still two memory accesses, but now there's a bitmasking operation added in. I suppose that for some object types you could be getting a 12-25% decrease in memory use for the base object, though.
More than that in the good cases. Something I forgot was that you'd probably have to knock variable-length types on the head.

Cheers,
mwh

-- 
I would hereby duly point you at the website for the current pedal powered
submarine world underwater speed record, except I've lost the URL.
                                          -- Callas, cam.misc