Does Python really follow its philosophy of "Readability counts"?

Michele Simionato michele.simionato at gmail.com
Thu Jan 15 06:34:11 CET 2009


On Jan 14, 8:16 pm, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:
> I have a situation which I face almost every day, where I have some
> gigabytes of data that I want to slice and dice somehow and get some
> numbers out of.  I spend 15 minutes writing a one-off Python program
> and then several hours waiting for it to run.
> Often, the Python program crashes halfway through, even though I
> tested it on a few megabytes of data before starting the full
> multi-gigabyte run, because it hit some unexpected condition in the
> data that could have been prevented with more compile time checking
> that made sure the structures understood by the one-off script matched
> the ones in the program that generated the input data.

I know the feeling. The worst thing is when you have a stupid typo
(which would be immediately caught by a compiler) that affects a
section of the code which only runs after 8 hours of computation. Pylint
helps - when you remember to run it on your whole code base - but it
is definitely not the same as a compiler.
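A minimal sketch of that failure mode (the function and data are made
up for illustration): a typo hides in a branch that the small test data
never exercises, so the script runs fine for hours until the full data
set finally takes that branch:

```python
# Sketch: a typo in a rarely-taken branch. A compiler would reject it;
# Python only raises an error at runtime, when the branch executes.
def summarize(records):
    total = 0
    for rec in records:
        if rec < 0:
            # Typo: 'totl' instead of 'total'. Pylint flags this
            # (used-before-assignment); plain Python does not, until now.
            totl += rec
        else:
            total += rec
    return total

# Works fine on test data that never hits the buggy branch...
print(summarize([1, 2, 3]))  # 6

# ...but crashes as soon as a negative value appears in the real data.
try:
    summarize([1, -2, 3])
except UnboundLocalError as e:
    print("crashed:", e)
```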

> I would be ecstatic with a version of Python where I might have to
> spend 20 minutes instead of 15 minutes writing the program, but then
> it runs in half an hour instead of several hours and doesn't crash.  I
> think the Python community should be aiming towards this.

Dunno. Python has been designed from the start to be fully
dynamic and with no type checks, I don't think it is really
possible to change that. On the other hand, one could envision
a Python-like language with more type safety (there are already
a few experiments in that direction).
Personally, I would be glad to trade some flexibility for additional
safety; if we get more speed as an additional bonus, that's fine
too.
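Short of a new language, one can already buy some of that safety by
validating the input structure up front, so a multi-gigabyte run fails
in seconds rather than hours. A hedged sketch (the field names and
schema here are invented for illustration, not from the original post):

```python
# Sketch: declare the expected record structure once and check it
# before the expensive computation starts, instead of letting a
# mismatch surface halfway through an 8-hour run.
EXPECTED_FIELDS = {"id": int, "value": float}

def validate(record):
    for name, typ in EXPECTED_FIELDS.items():
        if name not in record:
            raise ValueError("missing field %r" % name)
        if not isinstance(record[name], typ):
            raise TypeError("field %r: expected %s, got %s"
                            % (name, typ.__name__,
                               type(record[name]).__name__))

good = {"id": 1, "value": 2.5}
bad = {"id": 1, "value": "oops"}  # structure drifted from the producer

validate(good)  # passes silently
try:
    validate(bad)
except TypeError as e:
    print("rejected early:", e)
```

It is only a runtime check, not the compile-time guarantee discussed
above, but it moves the crash from the end of the run to the start.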
