Typing system vs. Java
aleaxit at yahoo.com
Fri Aug 3 14:51:01 CEST 2001
"Michael Abbott" <michael at rcp.co.uk> wrote in message
news:Xns90F26B7747E94michaelrcpcouk at 220.127.116.11...
> > I suspect we may be talking at cross-purposes -- I don't request
> > a language that will in fact absolutely CHECK assertions containing
> > universal or existential quantifiers, nor one that will always be
> > able to take advantage of them to further optimization or other
> > type-inference goals: just one that will let me *express* such
> > assertions simply, which, per se, isn't all that hard.
> Um. But I don't see the point. If the compiler's going to ignore
> everything you write, then why not just put your assertions into comments?
> It's what we all have to do already!
When expressing design intentions in a formal language, I get
a *CHANCE* that they will be checked, at least partially, or
used for optimization purposes and other inferences. It's the
same difference there is between
# here x will always be greater than y
and
assert x > y
the latter is a formal way to express the former, which makes it:
-- totally unambiguous (comments are in natural language,
   thus ambiguous more often than not)
-- potentially checkable -- some compiler might turn this into
       if not (x>y): raise AssertionError
   at least with the right flags
-- potentially usable for inferences -- again with the right
   flags a compiler MIGHT be able to use this alleged fact
   to get higher-performing code (e.g. eliminate a following
   "if x<y" block)
The first point always holds and it's a key one. Second and third
are just possibilities (with current Python technology, for assert,
the second does hold, the third one doesn't), but the point is:
once the construct is in the language, and therefore in those
application programs that might live for the next 20 or 40 years,
compiler technology may improve and actually take advantage of
what today are just 'potentials', aka possibilities.
So I want to be able to write, say (I'm not hung up on syntax):
def sort(sequence, predicate):
    forany x: not predicate(x,x)
    forany x,y: truth(predicate(x,y)) != truth(predicate(y,x))
or whatever -- whether a current compiler is able to insert SOME
partial checks (compile- or run-time), or take some inferences from
these formal assertions, is of SECONDARY importance. They still
stand as formal-language, potentially-(partially)-checkable,
potentially-usable-for-optimization, unambiguous expressions
of my design intent, so their existence enhances the value of
the program containing them. And if they didn't add substantial
complication, they'd be worth having in the language.
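Lacking a `forany` construct, one can hand-simulate the kind of partial run-time check a compiler might insert for these quantified assertions. A sketch (the name `spot_check`, the sample data, and the use of `bool` for `truth` are all mine; sampling pairs proves nothing, it only checks partially):

```python
import itertools

def spot_check(seq, predicate, limit=50):
    """Partially check, over a finite sample of actual data:
         forany x:            not predicate(x, x)
         forany distinct x,y: truth(predicate(x,y)) != truth(predicate(y,x))
    i.e. the sort of partial run-time check a compiler MIGHT
    derive from the formal assertions: sampling, not proof."""
    for x in seq[:limit]:
        assert not predicate(x, x), f"reflexive at {x!r}"
    pairs = itertools.combinations(seq, 2)
    for x, y in itertools.islice(pairs, limit):
        if x != y:
            assert bool(predicate(x, y)) != bool(predicate(y, x)), \
                f"asymmetry violated at {x!r}, {y!r}"

data = [3, 1, 4, 5, 9]
spot_check(data, lambda a, b: a < b)   # intent holds: passes silently
print(sorted(data))                    # prints [1, 3, 4, 5, 9]
```

Feed it `lambda a, b: a <= b` instead and the irreflexivity check fails at once -- a violated design intent surfaces as a plain `AssertionError`, rather than as a silently wrong sort.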
> I always thought that compiler register allocation made the register
> keyword pointless. Surely using this keyword is just noise now.
It doesn't tell the reader of the program much that is useful, no.
> Seems to me that there'd only be any point in putting such hints into your
> program if there was a chance that they'd have some concrete effect.
> Otherwise you might as well just write good comments, surely?
See it the other way 'round: you might as well write things out
in formalized language -- they'll be at least as good as the best
comments you could possibly write (no ambiguity risk), PLUS,
if an application lives for 40 years who can rule out a future
compiler may be able to make use of them?
> Interesting idea. So, our "high level abstract assertions" can be used in
> several ways:
> 1. To supplement comments
> 2. As a pious hope that the compiler will check and validate them (fat
I think you mean 'fat chance' _at compile-time_. Well, maybe.
> 3. As run time debug consistency checks
> 4. To generate automated thorough tests
> 5. As a stepping stone to the future
> Hmm. Well, for 1 I think we should write comments,
I think otherwise, because natural language is inherently ambiguous.
> > greater than either of their neighbors" -- now *THAT* is
> > what I'd call "typing".
> But what do you gain by writing all this in a formal language that's going
> to be quietly ignored by all your tools? Why not just write good
Because comments are in natural language and natural language
is inherently ambiguous. Sorry for repeating this answer, but
then you keep repeating the question:-).
> Oh, that's unfair! After all, there is always a trade-off in type
> between the expressive power of the language being checked and how much
> checking is actually performed.
I think the sweet spot of the tradeoff (for all purposes except
perhaps code speed, and Self, Stalin &c make that disputable)
is a VERY expressive language -- one which can express a lot,
including stuff for which the checking is not normally performed.
The only trade-off that's embarrassing is the one that comes if
one takes for granted that the compiler MUST make use NOW of
EVERYTHING it lets you express -- then, it had better be darn
selective on what it lets you express. But the 'taking for
granted' is IMHO totally unwarranted.