Rationale for core Python numeric types
nospam at here.com
Fri Jul 16 16:49:28 CEST 2004
Thanks for the reply. Let me note, before adding some more detailed
comments, that I have used Python from time to time in the past, but
not regularly. I'm thinking seriously of implementing a pretty large
project with Python, involving 3D rendering and databasing, but I have
some concern that the large amounts of binary data that will be tossed
around imply that I'll end up implementing everything but a little
glue in C. I don't want, in particular, to find that the language is
evolving away from what I would consider to be a useful state.
On Fri, 16 Jul 2004 14:27:37 +0100, Peter Hickman
<peter at semantico.com> wrote:
>I'll try to give this a shot. The data types you talk about are machine types,
>int long, unsigned long long, float, double, unsigned char - they all exist as a
>function of the hardware that they run on. When you are programming in a
>structured macro assembler such as C (and to some extent C++) then these things
>are important, you can optimise both speed and storage by selecting the correct
>type. Then you port your code and the two byte int turns into four bytes,
>structures change size as the data aligns itself to even (or is it odd) word
>boundaries - like it does on the m68k.
>Python, along with other languages of this class, abstract away from that. You
>have integers and floats and the language handles all the details. You just
>write the code.
Abstraction is necessary, but the various numerical types you cite
-are- abstractions. In real life, various CPUs implement 'two-byte
integers' with various particular bit- and byte-orders-- but those
differences are abstracted away in any modern language.
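As a concrete aside, Python makes this point nicely itself: the struct
module lets you ask for either byte order explicitly, while the ordinary
integer stays order-free. A minimal sketch:

```python
import struct

# The same abstract value 0x0102, laid out with each byte order explicitly.
big = struct.pack('>H', 0x0102)     # big-endian 16-bit: b'\x01\x02'
little = struct.pack('<H', 0x0102)  # little-endian 16-bit: b'\x02\x01'

# At the language level the integer is identical either way; a byte
# order only exists once you ask for a concrete layout.
assert struct.unpack('>H', big) == struct.unpack('<H', little) == (0x0102,)
```

The byte order is a property of the *representation* you request, not of
the number-- which is exactly the abstraction at issue.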
In fact, since Python compiles to bytecode, there is an entirely
concrete, non-notional 'Python machine' that underlies the language.
This machine -could- have any collection whatever of numerical types,
as specific or as abstract as desired. My question is 'What is the
model' in Python? Is the model for Python, in some vague sense,
'linguistic' rather than 'numerical' or 'bit-twiddly'?
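For what it's worth, the direction the model has actually taken can be
sketched (PEP 237's int/long unification was underway around this time;
the following reflects the Python of today, so treat it as a sketch of
where things ended up rather than the 2004 state):

```python
import sys

# Integers in today's Python are arbitrary-precision: there is no
# machine word size in the model, only a mathematical integer.
n = 2 ** 64                 # wider than any common hardware register
assert (n + 1) - n == 1     # no overflow, no wraparound

# Floats are the one concession to the hardware: a C double underneath,
# with its limits exposed rather than hidden.
print(sys.float_info.max)   # the platform double's largest finite value
```

So the answer seems to be 'linguistic' for integers and 'numerical' for
floats: one abstract type, one thin wrapper over the machine.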
>If you look at the development of C from its roots in B you will see that all
>these variants of integers and floats were just to get a correct mapping to the
>facilities supplied by the hardware and as long as languages were just glorified
>assembler then to get things to work you needed this menagerie of types.
But B evolved to standard C which, somewhat notoriously, takes a
different approach. C declines to say, for example, exactly what
'short' and 'long' mean-- specifying constraints instead-- e.g.,
'short' is not longer than 'long'. It's a compromise, but it works.
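You can actually watch this compromise from inside Python: the struct
module's native format codes report whatever widths the local C compiler
picked, while the C standard pins down only their ordering. A sketch:

```python
import struct

# struct's native format codes report what the local C compiler chose.
short_w = struct.calcsize('h')   # C short
int_w = struct.calcsize('i')     # C int
long_w = struct.calcsize('l')    # C long

# The C standard guarantees only the ordering, not the exact widths.
assert short_w <= int_w <= long_w
print(short_w, int_w, long_w)    # e.g. 2 4 8 on a typical 64-bit Linux box
```

The printed widths vary by platform; the assertion never does-- which is
the constraint-based approach in miniature.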
My question is where Python is headed. I'm wary of purists telling me
what I need and don't need. There's a lot of middle ground between
machine language and, say, a dead and unlamented computer
language such as FORTH, where you could get into arguments with FORTH
aficionados about whether anyone -really- needs floating point.
There is no virtue in believing something that can be proved to be true.