learnpython.org - an online interactive Python tutorial

Steven D'Aprano steve+comp.lang.python at pearwood.info
Sun Apr 24 04:13:05 EDT 2011

On Sun, 24 Apr 2011 11:35:28 +1000, Chris Angelico wrote:

> On Sun, Apr 24, 2011 at 10:42 AM, Steven D'Aprano
> <steve+comp.lang.python at pearwood.info> wrote:
>> This is much like my experience with Apple's Hypertalk, where the only
>> data structure is a string. I'm very fond of Hypertalk, but it is
>> hardly designed with machine efficiency in mind. If you think Python is
>> slow now, imagine how slow it would be if every expression had to be
>> converted from a number back into a string, and vice versa, after every
>> operation:
>> x = str(int("1") + int("2"))
>> y = str(int("9")/int("3"))
>> z = str(int(x) - int(y))
>> flag = str(int(z) == int("0"))
>> Naturally you wouldn't write the conversions out like that; they would
>> be done only implicitly, by the interpreter.
> Except that it wouldn't bother with a native integer implementation,
> would it? With a string-is-bignum system, it could simply do the
> arithmetic on the string itself, with no conversions at all.

I can assure you that Hypertalk had no BigNum system. This was in the 
days of Apple Mac when a standard int was 16 bits, although Hypertalk 
used 32 bit signed long ints.
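A string-is-bignum scheme is easy enough to sketch, at least for addition: grade-school arithmetic works digit by digit directly on the decimal strings, never building a machine integer for the whole number. (This is purely hypothetical illustration, not anything Hypertalk actually did.)

```python
def str_add(a, b):
    """Add two non-negative decimal strings digit by digit,
    without ever converting the whole number to an int."""
    # Pad the shorter string with leading zeros so the digits line up.
    width = max(len(a), len(b))
    a, b = a.rjust(width, "0"), b.rjust(width, "0")
    result = []
    carry = 0
    # Walk right to left, exactly like grade-school addition.
    for da, db in zip(reversed(a), reversed(b)):
        carry, digit = divmod(int(da) + int(db) + carry, 10)
        result.append(str(digit))
    if carry:
        result.append(str(carry))
    return "".join(reversed(result))

print(str_add("999", "1"))  # prints 1000
```

Only single digits are ever converted; the cost is one pass per operation over every digit of both operands, which is exactly where the inefficiency below comes from.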

But a text-string based bignum would be quite inefficient. Consider the 
relatively small number 256**100. Written out as a decimal string it 
requires 241 digits; at one byte per digit, that's 241 bytes. Python 
stores longints in base 256, which requires only 100 bytes (plus some 
overhead, because it's an object):
>>> sys.getsizeof(256**100)
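
The comparison is easy to reproduce in any modern Python (the exact getsizeof figure varies by version and platform, so no particular number is assumed here):

```python
import sys

n = 256 ** 100  # an 800-bit number, i.e. 100 bytes of payload

# Stored as a decimal string: one byte per digit, before
# any per-object overhead is even counted.
print(len(str(n)))      # 241

# Python's packed binary representation, including the
# int object's header, is far smaller than the string.
print(sys.getsizeof(n))
```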

I suppose an implementation might choose to trade off memory for time, 
skipping string -> bignum conversions at the cost of doubling the 
memory requirements. But even if I grant you bignums, you have to do the 
same for floats. Re-implementing the entire floating point library is not 
a trivial task, especially if you want to support arbitrary precision 
floats.
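
For a sense of what that entails, Python's own decimal module already provides arbitrary-precision decimal floating point, and it is a substantial library in its own right. A small taste (nothing here is specific to the Hypertalk discussion, just standard decimal usage):

```python
from decimal import Decimal, getcontext

# Ask for 50 significant digits instead of the binary double's ~17.
getcontext().prec = 50
print(Decimal(1) / Decimal(3))  # 0.33333... to 50 digits

# Decimal arithmetic is exact where binary floats are not:
# 0.1 + 0.2 == 0.3 fails for binary doubles, but holds here.
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```

The module has to handle rounding modes, context flags, signals, NaNs and infinities, which is precisely the non-trivial work alluded to above.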

