On Sat, Sep 18, 2021 at 11:24:40AM +0900, Stephen J. Turnbull wrote:
> Steven D'Aprano writes:
> > But I don't think it would be a big problem unless the caller was
> > mixing calls to gamma with int and float arguments.
> You mean `factorial` here, right? `gamma` coerces int to float
> before evaluating, doesn't it?
Right, yes, sorry for the confusion.
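For reference, that coercion is easy to check in CPython: math.gamma accepts an int argument and returns a float, with gamma(n) equal to (n-1)! for positive integer n. (This snippet is an editorial illustration, not part of the original mail.)

```python
import math

# math.gamma accepts an int and coerces it to float internally;
# gamma(n) == (n - 1)! for positive integer n.
print(math.gamma(5))        # 24.0 (a float)
print(math.factorial(4))    # 24 (an exact int)
```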
> > If you stick to one or the other, it wouldn't matter.
> Users who are disciplined enough to stick to one or the other will
> use factorial when appropriate, and gamma when that's appropriate.
> The point of the proposal is to allow the less pedantic not to worry
> about the difference, and just use `factorial`.
Indeed, and that's one of the problems with the proposal.

The batteries in my HP-48GX are flat so I can't see what it does, but 
the HP-39G-II has a factorial function that computes the gamma function:

    5.5! --> 287.885277815

but even for integer arguments, it always returns a float. (The 
calculator merely displays floats as if they were exact integers if 
they are small enough.) So for sufficiently large input, n! on the 
calculator is already going to be rounded to whatever float precision 
the calculator provides:

    18! --> 6402373705730000
    19! --> 1.21645100409E17
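The same rounding shows up in Python terms (an editorial sketch using the standard math module, not part of the original mail): math.factorial returns an exact int, while a float result, like the calculator's, is limited to a double's 53 significand bits. 22! is the last factorial that fits exactly in a double; 23! is not exactly representable.

```python
import math

# The calculator's 5.5! is gamma(6.5):
print(math.gamma(6.5))          # ~287.885277815

# 22! survives a round trip through float, but 23! does not:
# a double has 53 significand bits, and 23!'s odd part needs more.
for n in (22, 23):
    exact = math.factorial(n)   # exact integer
    rounded = int(float(exact)) # nearest double, back to int
    print(n, rounded == exact)
```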
> Or if we had automatic simple type dispatch, we could define:
> [...]
> and nobody would care that the two factorial functions had different
> performance and precision characteristics.
> Except that it would still be the case that
>
>     >>> factorial(23) == factorial(23.0)
>     False
Sure, but if simple type dispatch (generic functions) were built into 
the language, people would be perfectly comfortable with the idea that 
two functions with the same name but accepting different types are 
different functions that might return different values. It only seems 
weird because we've forgotten the Python 2.x days:

    >>> 11.0/2 == 11/2
    False

I acknowledge that those who have not yet learned that floats are not 
real numbers and don't have infinite precision, and hence are surprised 
that sqrt(3)**2 != 3, will be surprised by this as well. But let's be 
honest, people who expect floating point maths to be identical to pure 
mathematics are surprised by all sorts of things. Python is not 
Scratch; our audience is not intended to be only the unsophisticated 
and unlearned newbie casual programmer.

Anyway, I agree that trying to fit gamma into factorial would not be a 
great fit for the language as it stands. The benefit is just too 
little for the complexity it would add.

-- 
Steve
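[Editorial note: the elided definition quoted above could be sketched today with functools.singledispatch. This is a hypothetical reconstruction, not the code from the earlier mail, and it also reproduces the inequality Stephen points out.]

```python
import math
from functools import singledispatch

@singledispatch
def factorial(x):
    raise TypeError(f"no factorial defined for {type(x).__name__}")

@factorial.register
def _(n: int) -> int:
    # Exact integer factorial, arbitrary precision.
    return math.factorial(n)

@factorial.register
def _(x: float) -> float:
    # Real-valued extension via the gamma function: x! == gamma(x + 1).
    return math.gamma(x + 1.0)

print(factorial(4))      # 24 (exact int)
print(factorial(5.5))    # ~287.885277815 (float, via gamma)
print(factorial(23) == factorial(23.0))  # False: the float is rounded
```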