number accuracy, by Luke Kenneth Casson Leighton <email@example.com>
python classes that support number types have an automatic (hierarchical) cast mechanism. unfortunately, the existing mechanism can only really cope with the builtin types: user-defined number types cannot be type-cast to or from other user-defined number types, because every class involved, including the builtin number types, would need to know about every other number type.
the proposal is therefore to provide a means by which the accuracy of numbers may be determined independently of the classes involved.
number ranking
--------------
the plan is to provide a function that returns a tuple, or a list of tuples, indicating the resolution (accuracy) of the numerical class.
type-casting may then occur by first checking the length of the list and then by comparing the tuples in the list.
the tuple consists of:
- 0 if the representation can only do positive numbers, 1 if it can do both positive and negative
- the number of binary digits (log2 of the maximum range) in the mantissa
- optionally, the number of binary digits in the exponent
    def __accuracy__(self):
        return (1, 1000000000000000L)   # presumed infinite!

    def __accuracy__(self):
        return (1, 32, 12)   # up to a 32-bit mantissa, 12-bit exponent
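to make that concrete, a user-defined floating-point class might advertise its resolution like this (a minimal sketch; the class name and its fields are hypothetical, not part of the proposal):

    class float32e12:
        # hypothetical user-defined float: signed, 32-bit
        # mantissa, 12-bit exponent
        def __init__(self, mantissa, exponent):
            self.mantissa = mantissa
            self.exponent = exponent

        def __accuracy__(self):
            # signed (1), 32-bit mantissa, 12-bit exponent
            return (1, 32, 12)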
when complex numbers are involved, you want to do this:
    return [(1, 32, 12), (1, 32, 12)]
which indicates that both the real and imaginary components of the 2-dimensional number space are capable of supporting floating point, e.g. "0.95829-32.59j".
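a hypothetical complex class built from two such components could then advertise one tuple per dimension (again just a sketch):

    class complex32e12:
        # hypothetical complex type: real and imaginary parts,
        # each a signed 32-bit-mantissa, 12-bit-exponent float
        def __init__(self, real, imag):
            self.real = real
            self.imag = imag

        def __accuracy__(self):
            # one tuple per dimension of the number space
            return [(1, 32, 12), (1, 32, 12)]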
a simple test determines which of two numbers is the more accurate, and therefore which way a typecast should go:
    if num1.__accuracy__() > num2.__accuracy__():
        return num1
    else:
        return num2
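the comparison above assumes both sides return the same shape. a slightly fuller version, normalising the bare-tuple and list-of-tuples forms before comparing lengths and then tuples (the helper names here are made up purely for illustration):

    def accuracy_list(num):
        # normalise: a bare tuple becomes a one-element list
        acc = num.__accuracy__()
        if isinstance(acc, tuple):
            return [acc]
        return acc

    def more_accurate(num1, num2):
        # compare the number of dimensions first, then the
        # per-dimension tuples themselves
        a1 = accuracy_list(num1)
        a2 = accuracy_list(num2)
        if (len(a1), a1) > (len(a2), a2):
            return num1
        return num2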
i believe the same principle may also be applied to actually performing a typecast, via two further methods:
    def __mantissa__(self, dimension=0):   # returns a builtin int/long
    def __exponent__(self, dimension=0):   # returns a builtin int/long
as long as these two functions return numbers of builtin types (int and long, basically), it will be possible to type-cast between any user-defined numerical types.
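a minimal sketch of what such a generic cast could look like, assuming (purely for illustration) that the value of each dimension is mantissa * 2**exponent and that the target class accepts that plain number in its constructor:

    def typecast(value, target_class, dimension=0):
        # __mantissa__ and __exponent__ return builtin ints/longs,
        # so the target class needs no knowledge of the source
        # class: the value is rebuilt as mantissa * 2**exponent
        m = value.__mantissa__(dimension)
        e = value.__exponent__(dimension)
        return target_class(m * 2 ** e)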