How about adding rational fraction to Python?

Paul Rubin
Mon Mar 3 17:39:52 CET 2008

Jeff Schwab <jeff at> writes:
> > User defined types in python are fairly heavyweight compared with the
> > built-in types,
> Yet they continue to form the basis of almost all non-trivial Python
> programs.  Anyway, it's a bit soon to be optimizing. :)

Large Python programs usually have some classes for complex data
structures, but it's not typical Pythonic practice to define new
classes for things as small as integers.
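(For what it's worth, the standard library later gained exactly such a
small numeric type: the `fractions` module, added in Python 2.6, provides
a `Fraction` class. A minimal illustration:)

```python
# The fractions module (standard library since Python 2.6) gives exact
# rational arithmetic without a hand-rolled class.
from fractions import Fraction

half = Fraction(1, 2)
third = Fraction(1, 3)
print(half + third)    # exact: 5/6
print(Fraction(3, 6))  # normalized automatically: 1/2
```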

> > and a type like that is just another thing for the user to have to
> > remember.
> How so?  A well-written function generally shouldn't depend on the
> exact types of its arguments, anyway.

By "another thing to remember" I mean that the right thing should
happen with the normal integers that result from writing literals like
1 and 2, without resorting to a nonstandard user-defined type.
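(For the record, the semantics of those literals did change across the
2-to-3 transition: Python 3 made `/` between ints true division, with
`//` for flooring. A quick check of the Python 3 behavior:)

```python
# Python 3: / between ints is true division; // is floor division.
# (In Python 2, 1/2 floored to 0 unless "from __future__ import division"
# was in effect.)
print(1 / 2)    # 0.5
print(1 // 2)   # 0
print(7 // 2)   # 3
```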

> If someone has written a
> function to find (e.g.) the median of a collection of numbers, their
> code should already be prepared to accept values of user-defined
> numeric types.

It's important to be able to write such generic or polymorphic
functions, but most typical functions are monomorphic.
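The median example from the quoted text makes the point concretely; a
duck-typed sketch, which accepts ints, floats, or a rational type alike:

```python
from fractions import Fraction

def median(values):
    # Works with any type supporting comparison, +, and / -- no
    # isinstance checks needed.
    s = sorted(values)
    n = len(s)
    if n % 2:
        return s[n // 2]
    return (s[n // 2 - 1] + s[n // 2]) / 2

print(median([3, 1, 2]))                         # 2
print(median([Fraction(1, 2), Fraction(1, 4)]))  # 3/8
```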

> If I want to call such a function with my hand-rolled
> DivisionSafeInteger type, it should just work,

Sure; however, the Pythonic approach is to make the defaults do the
right thing without requiring such user-written workarounds.  Of
course, there is occasional ambiguity about what the right thing is.

> > file) and size_t, so if you pass an off_t to a function that expects a
> > size_t as that arg, the compiler notices the error.
> On what compiler?  I've never seen a C compiler that would mind any
> kind of calculation involving two native, unsigned types.

You are right, C is even worse than I remembered.

> > But they are
> > really just integers and they compile with no runtime overhead.
> They do indeed have run-time overhead, as opposed to (e.g.) meta-types
> whose operations are performed at compile-time. 

Not sure what you mean; by "no runtime overhead" I just mean that they
compile to the same code as regular ints, with no runtime checks.  OK,
it turns out that for all intents and purposes they ARE regular ints
even at compile time, but in other languages it's not like that.

> If you mean they have less overhead than types whose operations
> perform run-time checks, then yes, of course that's true.  You
> specifically stated (then snipped) that you "would be happier if
> int/int always threw an error."  The beauty of a language with such
> extensive support for user-defined types that can be used like
> built-in type is that you are free to define types that meet your
> needs. 
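The `DivisionSafeInteger` idea from the quoted text can be sketched in a
few lines.  This is one guess at the intended semantics (raise on inexact
division); the thread itself never defines the class:

```python
class DivisionSafeInteger(int):
    # Hypothetical sketch: an int whose / raises unless the quotient is
    # exact, per the "int/int should be an error" position above.
    def __truediv__(self, other):
        if self % other != 0:
            raise ValueError("%s / %s is not exact" % (int(self), int(other)))
        return DivisionSafeInteger(self // other)

print(DivisionSafeInteger(6) / 3)   # 2
try:
    DivisionSafeInteger(1) / 2
except ValueError as e:
    print("refused:", e)
```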

But those are not ints then.  We're discussing an issue of language
design, which is what the behavior of the ordinary, standard, default
ints should be.  My reason for suggesting int/int->error is that I
think it would increase program reliability in general.  But that is
only if it applies to all the ints by default, with int/int=float
being a possible result of a nonstandard user-defined type.  From the
zen list: "In the face of ambiguity, refuse the temptation to guess."

> My understanding is that Python will easily support lots of different
> types of just about anything.  That's the point.

No, I don't think so.  Also from the zen list: "There should be one--
and preferably only one --obvious way to do it."

> > There's an interesting talk linked from LTU about future languages:
> >
> Thanks, but that just seems to have links to the slides.  Is there a
> written article, or a video of Mr. Sweeney's talk?

I don't think there's an article.  There might be video somewhere.  I
thought the slides were enough to get the ideas across so I didn't
have much interest in sitting through a video.

More information about the Python-list mailing list