Alexander Belopolsky wrote:
On Mon, Nov 29, 2010 at 2:22 AM, "Martin v. Löwis" email@example.com wrote:
The former ensures that literals in code are always readable; the latter allows users to enter numbers in their own number system. How could that be a bad thing?
It's YAGNI, feature bloat. It gives the illusion of supporting something that actually isn't supported very well (namely, parsing local number strings). I claim that there is no meaningful application of this feature.
This is not about parsing locale-specific number strings, it's about parsing number strings represented using different scripts - besides, en-US is a locale as well, ye know :-)
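For reference, a quick illustration of the behaviour under discussion (a sketch checked against CPython 3; the example string uses Arabic-Indic digits, chosen here just as one non-ASCII script):

```python
# int() accepts decimal digits from any Unicode script, not just ASCII.
ARABIC_INDIC = "\u0661\u0662\u0663"  # Arabic-Indic digits ONE, TWO, THREE

assert int(ARABIC_INDIC) == 123
assert ARABIC_INDIC.isdigit()  # they carry the Unicode decimal-digit property
```

So the parser already follows the Unicode digit property rather than any particular locale's formatting conventions.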
Speaking of YAGNI, does anyone want to defend
Yes. The same arguments apply.
Just because ASCII-proponents may have a hard time reading such literals, doesn't mean that script users have the same trouble.
Especially given that we reject complex('1234.56i'):
We've had that discussion long before we had Unicode in Python. The main reason was that 'i' looked too similar to 1 in a number of fonts, which is why it was rejected for Python source code.
However, I don't see any reason why we shouldn't accept both i and j for complex(), since the input to that constructor doesn't have to originate in Python source code.
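For the record, this is the current behaviour being debated (checked against CPython; only 'j'/'J' are accepted as the imaginary suffix today):

```python
# complex() parses the 'j' suffix but rejects 'i' with a ValueError.
assert complex("1234.56j") == 1234.56j

rejected = False
try:
    complex("1234.56i")
except ValueError:
    rejected = True
assert rejected
```

Accepting 'i' as well would only require extending this string parser; the source-code tokenizer for imaginary literals could stay unchanged.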