Python syntax in Lisp and Scheme

Alex Martelli aleax at aleax.it
Mon Oct 6 13:12:18 EDT 2003


Erann Gat wrote:

> In article <uEwfb.222927$R32.7155971 at news2.tin.it>, aleax at aleax.it wrote:
> 
>> Erann Gat wrote:
>>    ...
>> > (That's why the xrange hack was invented.)
>> 
>> Almost right, except that xrange is a hack.
> 
> I presume you meant to say that xrange is *not* a hack.  Well, hackiness

Please don't put words in my mouth, thanks.  xrange _IS_ a hack, because
it was introduced in Python back in the dark ages before Python had the
iterator protocol.  If Python could be redesigned from scratch, without
needing to ensure backwards compatibility, the 'range' builtin (which 
might arguably be better named 'irange') would no doubt be a
generator returning suitable iterators depending on its parameters.

[[ It is possible, though far from certain, that the next major release
of Python (3.0, presumably coming in a few years), which by definition of
"major" IS entitled to introduce some backwards incompatibilities, will
solve this legacy problem.  3.0's main theme will be to simplify Python
by removing redundancies, "more than one way to do it"'s, accreted over
the years; in my personal and humble opinion, range and xrange are just
such mtowdtis -- e.g., when one does need a list containing an arithmetic
progression, list(irange(...)) -- the normal way to build a list from
any finite iterator, applied to a bounded arithmetic-progression
iterator -- is "the obvious way to do it", again IMPAHO. ]]
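
To make the aside concrete, here is a minimal sketch of such a
hypothetical 'irange' generator -- the name, and the function itself,
are illustrative only, not an actual builtin:

def irange(start, stop=None, step=1):
    # a hypothetical lazy 'range': yields an arithmetic progression
    # one item at a time instead of materializing a whole list
    if stop is None:
        start, stop = 0, start
    i = start
    while (step > 0 and i < stop) or (step < 0 and i > stop):
        yield i
        i += step

# list(irange(5)) == [0, 1, 2, 3, 4], just like range(5); but
# "for i in irange(10**9): ..." never builds a billion-item list.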

> is to some extent a matter of taste.  The only reason xrange exists is
> because range uses more memory than it needs to in the context where it is
> most often used.  IMO, any construct that exists solely to work around an
> inefficiency which should never have been there in the first place (as
> evidenced by the fact that Python is unique among programming languages in
> having this particular inefficiency) is a hack.

We agree in broad terms on the definition of "hack" -- although any
discourse using "should" is obviously going to be debatable, e.g., I
consider cuts in Prolog, and strictness annotations in Haskell, as
generally being hacks, but (never having implemented compilers for
either) I can't discuss whether the inefficiencies they work around
"should" be there, or not.  Generally, whenever release N+1 of a
language adds a construct or concept that is really useful, one
could argue that the new addition "should" have been there before
(except, perhaps, in those exceedingly rare cases where the new
feechur deals with things that just didn't exist in the past).


>> Since in Python you cannot change the language to suit your whims
> 
> That seems to be the crux of the difference between Python and Lisp.  For

I agree with you on this point, too.

> some reason that I don't quite fathom, Pythonistas seem to think that this
> limitation in their language is actually a feature.  It boggles my mind.

Imagine a group of, say, a dozen programmers, working together by
typical Agile methods to develop a typical application program of
a few tens of thousands of function points -- developing about
100,000 new lines of delivered code plus about as much unit tests,
and reusing roughly the same amount of code from various libraries,
frameworks, packages and modules obtained from the net and/or from
commercial suppliers.  Nothing mind-boggling about this scenario,
surely -- it seems to describe a rather run-of-the-mill case.

Now, clearly, _uniformity_ in the code will be to the advantage
of the team and of the project it develops.  Extreme Programming
makes a Principle out of this (no "code ownership"), but even if
you don't rate it quite that highly, it's still clearly a good
thing.  Now, you can impose _some_ coding uniformity (within laxer
bounds set by the language) _for code originally developed by the
team itself_ by adopting and adhering to team-specific coding
guidelines; but when you're reusing code obtained from outside,
and need to adopt and maintain that code, the situation is harder.
Either having that code remain "alien", by allowing it to break
all of your coding guidelines, or "adopting" it thoroughly by,
in practice, rewriting it to fit your guidelines, has a serious
negative impact on the team's productivity.

In any case, to obtain any given level of uniformity, even just
in the code newly produced by the team, you need more and more
coding guidelines as the language becomes laxer.  For example:
in Ruby, class names MUST start with an uppercase letter, no
ifs, no buts; languages where such a language-imposed rule does
not exist (such as Python) may choose to adopt it as a coding
convention -- but it's one more atom of effort in writing those
conventions and in enforcing them.  The language that makes this
a rule makes life marginally _simpler_ for you; the language
that gives you more freedom makes life _harder_ for you.  I hope
that by choosing an example where Python is the "freer", and thus
LESS simple, language, I can show that this line of reasoning is
not a language-induced bias; I appreciate such uniformity, and I
think it simplifies the life of teams engaged in application
development, quite apart from Python vs other-languages issues.
(Of course, I could have chosen many examples where Python is
the more uniform/restrictive language: e.g., after a "def x()",
Python _mandates_ a colon, where Ruby 1.8 makes it _optional_ in
the same syntactic position -- here, Ruby is "freer", and thus
Python makes your life simpler).
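
As an illustration of that "atom of effort", here is a minimal sketch
(purely hypothetical, not any standard tool) of the kind of trivial
checker a team might write to enforce such a naming convention:

import re, sys

CLASS_DEF = re.compile(r'^\s*class\s+(\w+)')

def check(filename):
    # flag class statements whose name does not start uppercase
    for lineno, line in enumerate(open(filename)):
        match = CLASS_DEF.match(line)
        if match and not match.group(1)[0].isupper():
            print '%s:%d: class %r should start uppercase' % (
                filename, lineno + 1, match.group(1))

if __name__ == '__main__':
    for name in sys.argv[1:]:
        check(name)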

The issue is by no means limited to lexical-level minutiae (though
in practice many people, and thus many teams, are quite prone to
waste inordinate amounts of debate on exactly such minutiae -- one
of the "laws" first humorously identified by Cyril Northcote Parkinson
in his hilarious columns for "The Economist", later collected in book
form).  A language which allows dozens of forms of loops gives you
much more freedom of expression -- and thereby gives you headaches
when you try to write coding guidelines making the team's code more
uniform (and worse ones if you need to adopt and maintain reused
code of alien origin).  One that lets (e.g.) the definition of a
single module, or class, be spread over many separate code parts
over many files, again, gives you much more freedom -- and more
headaches -- than one which mandates classes and modules be
defined in textually-contiguous blocks of code.  Any increase in
freedom of expression is thus not an unalloyed good: it carries
costs as well as benefits.  Therefore, just like any other aspect
of an engineering design, it is subject to trade-offs.

It should be clear by now that the "quantum jump" (using this phrase
in the quaint popular sense of "huge leap", rather than in the correct
one of "tiny variation":-) in expressiveness that is afforded by a
powerful macro system, one letting you enrich and enhance, and thus
CHANGE, the very language you're using, can be viewed as TOO MUCH
(for the context of application development by middle-sized teams
reusing substantial amounts of alien code) without needing to boggle
anybody's mind.  You may perfectly well disagree with this and
counter-claim that having macros available will make everybody into
wonderful language designers: I will still prefer to stick to what
my experience has taught me, even within the context of my generally
optimistic stance on human nature, which is that it just ain't so.

It's optimistic enough to believe that average practitioners WILL be
able to design good interfaces (functions, procedures, classes and
hierarchies thereof, ...) suitable for the task at hand and with some
future potential for reuse in similar but not identical contexts; I
believe that there are plenty of problems even within these limited
confines, and giving more powerful tools to the same practitioners,
more degrees of freedom yet, is IMHO anything but conducive to optimal
performance in the areas I'm most interested in (chiefly application
development by mid-sized teams with very substantial reuse).

I hope this presents my opinions (shared by many, though far from all,
in the Python community) clearly enough that we can "agree to
disagree" without offending each other with such quips as "boggling
the mind".


>> Good summary: if you fancy yourself as a language designer, go for Lisp;
> 
> By using Lisp you *become* a language designer in the normal course of
> learning to program in it.  After a while you even become a good language
> designer.

Here is the crux of our disagreement.  If you believe everybody can
become a good language designer, I think the onus is on you to explain
why most languages are not designed well.  Do remember that a vast
majority of the people who do design languages HAVE had some -- in
certain cases, VAST -- experience with Lisp, e.g., check out the
roster of designers for the Java language.  My thesis is that the
ability to design languages well is far rarer than the ability to
design well within more restricted confines, and good design is taught
and learned much more easily in restricted realms, much less easily
the broader the degrees of freedom.


>> if you prefer to use a language designed by somebody else, without you
>> _or any of the dozens of people working with you on the same project_
>> being able to CHANGE the language, go for Python.
> 
> If you prefer to remain forever shackled by the limitations imposed by
> someone else, someone who may not have known what they were doing, or
> someone who made tacit assumptions that are not a good fit to your
> particular problem domain, then by all means go for Python.  (Oh, you also

If I thought Python's design was badly done, and a bad fit for my
problem domain, then, obviously, I would not have chosen Python (I
hardly lack vast experience in many other programming languages,
after all).  Isn't this totally obvious?

> have to be willing to give up all hope of ever having efficient
> native-code compilation.)

This assertion is false.  The psyco specializing-compiler already
shows what can be done in these terms within the restrictions of the
existing classic-python implementation.  The pypy project (among
whose participants Armin Rigo, the author of psyco, is counted) aims
(among other things) to apply the same techniques, without those very
confining restrictions, to provide "efficient native-code compilation"
to vastly greater extents.  Therefore, clearly, your assertion that
(to adopt Python) one has "to give up all hope" of such goals is not
at all well-founded.  There is nothing intrinsic to Python that can
justify it.  You may want to rephrase it in terms of having production
quality compilers to native code available _today_ -- but surely not in
terms of hopes, or in fact realistic possibilities, for the future.
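
For what it's worth, enabling psyco in an existing program typically
takes a couple of lines; the sketch below assumes psyco is installed,
and the dot() function is just an arbitrary numeric example, not
anything taken from psyco's documentation:

try:
    import psyco
    psyco.full()          # specialize functions lazily, as they run
    # psyco.bind(dot)     # or, selectively specialize chosen hot spots
except ImportError:
    pass                  # psyco not installed: plain interpreter

def dot(xs, ys):
    # the kind of simple numeric loop that specialization speeds up most
    total = 0.0
    for x, y in zip(xs, ys):
        total += x * y
    return total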


>> > That's what macros are mainly good for, adding features to the language
>> > in ways that are absolutely impossible in any other language.
>> > S-expression syntax is the feature that enables users to do this
>> > quickly and easily.
>> 
>> Doesn't Dylan do a pretty good job of giving essentially the same
>> semantics (including macros) without S-expression syntax?
> 
> Similar, but not quite the same.  But there are other reasons to use
> S-expressions besides macros.  For example, supposed you are writing code
> for a mission-critical application and you want to write a static analyzer
> to check that the code has a particular property.  That's vastly easier to
> do if your code is in S-expressions because you don't have to write a
> parser.

If your toolset includes a parser, you don't have to write one -- it's
there, ready for reuse.  The difficulty of writing a parser may give some
theoretical pause in a "greenfield development" idealized situation, but
given that parsers ARE in fact easily available it's not a compelling
argument in practice.  Still, we're not debating S-expressions, but,
rather, macros: from that POV, it seems to me that Dylan is quite a bit
more similar to Lisp than to Python -- even though, in terms of many
aspects of surface syntax, the reverse may appear true.  (Just to show
that I'm _NOT_ uncritically accepting of anything Python and critical of
anything nonPython: I _do_ envy Dylan, and Lisp, the built-in generic
function / multimethod approach -- I think it's superior to the single
dispatch of Smalltalk / Ruby / Python / C++ / Java, with more advantages
than disadvantages, and am overjoyed that in pypy we have based the
whole architecture on a reimplementation of such multiple dispatch).
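
As an aside, the flavor of generic-function / multiple dispatch being
referred to can be sketched in a few lines of plain Python -- an
illustrative toy only, emphatically NOT the mechanism pypy actually uses:

_methods = {}

def defmethod(name, types, func):
    # register an implementation under (generic-name, argument-types)
    _methods[(name,) + tuple(types)] = func

def dispatch(name, *args):
    # look up the implementation matching the exact argument types
    func = _methods.get((name,) + tuple([type(a) for a in args]))
    if func is None:
        raise TypeError('no %r method for %r' % (name, args))
    return func(*args)

def add_numbers(a, b): return a + b
def join_words(a, b): return '%s %s' % (a, b)

defmethod('combine', (int, int), add_numbers)
defmethod('combine', (str, str), join_words)

print dispatch('combine', 1, 2)                    # 3
print dispatch('combine', 'multiple', 'dispatch')  # multiple dispatch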


>> > For example, imagine you want to be able to traverse a binary tree
>> > and do an operation on all of its leaves.  In Lisp you can write a
>> > macro that lets you write:
>> > 
>> > (doleaves (leaf tree) ...)
>> > 
>> > You can't do that in Python (or any other language).
>> 
>> Well, in Ruby, or Smalltalk, you would pass your preferred code block
>> to the call to the doleaves iterator, giving something like:
>> 
>>     doleaves(tree) do |leaf|
>>         ...
>>     end
> 
> True.  For any single example (especially simple ones) I can give you can
> almost certainly find some language somewhere that can do that one thing
> with a specialized construct in that language.  The point is, macros let
> you do *all* these things with a single mechanism.

And nuclear warheads let you dispatch any enemy with a single kind of
weapon.  Despite which, some of us are QUITE happy that other weapon
systems still exist, and that our countries have abjured nukes...;-).

 
>> while in Python, where iterators are "the other way around" (they
>> get relevant items out rather than taking a code block in), it would be:
>> 
>>     for leaf in doleaves(tree):
>>         ...
> 
> Forcing you to either waste a lot of memory or write some very awkward
> code.

I _BEG_ your pardon...?  Assuming for definiteness that a tree is a
sequence of leaves and subtrees, and some predicate leafp tells me
whether something IS a leaf:

def doleaves(tree):
    for item in tree:
        if leafp(item):
            yield item
        else:
            for leaf in doleaves(item):
                yield leaf

where do I "waste a lot of memory"?  What is "very awkward" in
the above code?  I really don't understand.
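
For definiteness, here is how the generator above behaves on a small
nested-list tree, with leafp assumed (per the stated definition) to mean
"anything that is not a list is a leaf":

def leafp(item):
    # assumed predicate: a leaf is anything that is not a (sub)list
    return not isinstance(item, list)

tree = [1, [2, 3], [[4], 5]]
for leaf in doleaves(tree):
    print leaf        # prints 1, 2, 3, 4, 5 -- one leaf at a time
# no list of all leaves is ever built unless one explicitly asks for
# list(doleaves(tree)); memory use stays proportional to tree depth.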


> Here's another example: suppose you're writing embedded code and you need
> to write a critical section, that is, code that runs with no interrupts
> enabled.  In Lisp you can use macros to add a macro that lets you write:
> 
> (critical-section [code])
> 
> Getting this macro right is non-trivial because you have to make sure that
> it works properly if critical sections are nested, and if there are
> non-local exits from the code.

In Ruby and Smalltalk, you can clearly pass the code block to the
critical-section method just as you would pass it to any other iterator
(i.e., the same non-syntax-altering mechanism covers this kind of need
just as well as it covers looping).  In Python, the cultural preference
is for explicitness, thus try/finally (which IS designed specifically to
ensure handling of "nonlocal exits", and has no problem being nested)
enjoys strong preference over the "use of the same construct for widely
different purposes" which WOULD currently be allowed by iterators:

>>> class criticalsection(object):
...   def __init__(self): print 'Entering'
...   def __del__(self): print 'Exiting'
...   def __iter__(self): return self
...   def next(self): return None
...
>>> for x in criticalsection():
...   print "about to nonlocal-exit"
...   raise RuntimeError, "non-local exit right here"
...
Entering
about to nonlocal-exit
Exiting
Traceback (most recent call last):
  File "<stdin>", line 3, in ?
RuntimeError: non-local exit right here
>>>

Such reliance on the __del__ ("destructor") is not a well-received
idiom in Python, and the overall cultural preference is strongly for
NOT stretching a construct to perform widely different tasks (looping
vs before/after methods), even though technically it would be just
as feasible with Python's iterators as with Smalltalk's and Ruby's.

So, it IS quite possible that Python will grow a more specific construct
to ensure the same semantics as try/finally in a more abstract way --
not because Python's iterators aren't technically capable of it, but
because using them that way would hit against such cultural issues
as the dislike for stretching a single tool to do different things.
(Clearly, macros would not help fight such a cultural attitude:-).
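
For concreteness, the explicit try/finally idiom referred to above might
be sketched as follows; disable_interrupts and enable_interrupts are
hypothetical placeholders for whatever platform-specific calls an
embedded system would really use:

_nesting = [0]

def disable_interrupts():
    # HYPOTHETICAL placeholder: real embedded code would touch hardware
    _nesting[0] += 1

def enable_interrupts():
    _nesting[0] -= 1

def do_critical_work():
    disable_interrupts()
    try:
        pass      # ... the operations needing interrupts off go here ...
    finally:
        enable_interrupts()   # runs on normal and non-local exits alike

Nesting falls out naturally: each level pairs its own disable with its
own enable in the finally clause, whatever happens in between.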


> Another example: suppose you want to write some code that ensures that a
> particular condition is maintained while a code block executes.  In Lisp
> you can render that as:
> 
> (with-maintained-condition [condition] [code])
> 
> e.g.:
> 
> (with-maintained-condition (< minval (reactor-temp) maxval)
>   (start-reactor)
>   (operate-reactor)
>   (shutdown-reactor))
> 
> What's more, WITH-MAINTAINED-CONDITION can actually look at the code in
> the CONDITION part and analyze it to figure out how to take action to
> avoid having the condition violated, rather than just treating the
> condition as a predicate.

I have no idea of how with-maintained-condition would find and
examine each of the steps in the body in this example; isn't
the general issue quite equivalent to the halting problem, and
thus presumably insoluble?  If with-maintained-condition is, as
it would appear here, written by somebody who's not a chemical
engineer and has no notion about control of temperatures in
chemical reactors (or, for completely different types of reactors,
the relevant specialist engineer), HOW does it figure out (e.g.) the
physical model of the outside world that is presumably being
controlled here?  It seems to me that, compared to these huge
semantical issues, the minor ones connected to allowing such
"nifty syntax" pale into utter insignificance.


>> However, it appears to me that the focus on where variable names are
>> to be determined may be somewhat misplaced.
> 
> Yes, I'm only focusing on that because I wanted to come up with simple
> examples.  The problem is that the real power of macros can't really be
> conveyed with a simple example.  By definition, any simple macro can
> easily be replaced with simple code.  WITH-MAINTAINED-CONDITION starts to
> come closer to indicating what macros are capable of.

If your claim is that macros are only worthwhile for "artificial
intelligence" code that is able, by perusing other code, to infer
(and perhaps critique?) the physical world model it is trying to
control, and modify the other code accordingly, I will not dispute
that claim.  Should I ever go back to the field of Artificial
Intelligence (seems unlikely, as it's rather out of fashion right
now, but, who knows) I will probably ask you for more guidance
(the Prolog that I was using 15/20 years ago for the purpose was
clearly nowhere near up to it... it lacked macros...!-).  But as
long as my interests suggest _eschewing_ "self-modifying code" as
the plague, it seems to me I'm at least as well off w/o macros!-)


>> If you dream of there being "preferably only one obvious way to do it",
>> as Pythonistas do
> 
> So which is the one obvious way to do it, range or xrange?

An iterator.  Unfortunately, iterators did not exist when range and
xrange were invented, and thus the 'preferably' is violated.  By dint
of such issues of maintaining backwards compatibility, and not having
been born "perfect like Athena from Zeus's head" from day one, Python
is not perfect: it's just, among all imperfect languages, the one that,
in my judgment, best fits my current needs and interests.


>> (try "import this" at an interactive Python prompt),
> 
> Nice.  I particularly like the following:
> 
> "Errors should never pass silently."
> 
> and
> 
> "In the face of ambiguity, refuse the temptation to guess."
> 
> I predict that if Python is ever used in mission-critical applications

Hmmm, "if"?  Google for python "air traffic control", or python
success stories, depending on your definition of "mission-critical".
Either way, it IS being so used.

> that it is only a matter of time before a major disaster is caused by
> someone cutting and pasting this:
> 
> def foo():
>   if baz():
>     f()
>   g()
> 
> and getting this:
> 
> def foo():
>   if baz():
>     f()
>     g()

Ah, must be a mutation of the whitespace-eating nanovirus that was
identified and neutralized years ago -- a whitespace-*adding*
nanovirus, and moreover one which natural selection has carefully
honed to add just the right number of spaces (two, in this weird
indentation style).  I would estimate the chance of such a nanovirus
attacking as comparable to that of the attack of other mutant
strains, such as the dreaded balanced-parentheses-eaters which might 
prove particularly virulent in the rich culture-broth of an
S-expressions environment.  Fortunately, in either case, the rich
and strong suite of unit tests that surely accompanies such a
"mission-critical application" will easily detect the nefarious
deed and thus easily defang the nanoviruses' (nanovirii's? nanovirorum...?)
menace, even more easily than it detects and defangs the "type errors"
so utterly dreaded by all those who claim that only strictly statically
typed languages could ever possibly be any use in mission-critical
applications.
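
For definiteness, the kind of unit test that would trivially defang that
particular nanovirus might look like the following sketch (foo, baz, f
and g are stand-ins for the hypothetical functions in the quoted example):

import unittest

calls = []

def baz(): return False
def f(): calls.append('f')
def g(): calls.append('g')

def foo():
    if baz():
        f()
    g()

class FooTest(unittest.TestCase):
    def test_g_runs_even_when_baz_is_false(self):
        # if g() had been sucked into the if-block, this test fails
        del calls[:]
        foo()
        self.assertEqual(calls, ['g'])

if __name__ == '__main__':
    unittest.main()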


>> if you revel in the possibilities of there
>> being many ways to do it, even ones the language designer had never even
>> considered (or considered and rejected in disgust:-), macros then become
>> a huge plus.
> 
> If you are content to forever be a second-class citizen in the programming
> world, to blindly accept the judgements of the exalted language designers
> as if they are gospel, even when the language designers obviously don't
> know what they're doing as Guido clearly didn't in early version of Python
> as evidenced by the fact that proper lexical scoping wasn't added until
> version 2, even when the language designers come up with horrible messes
> like C++, if you are willing to take on faith that the language designers
> anticipated every need you might ever have in any programming domain you
> might ever choose to explore, then indeed macros are of no use to you.

Whoa there.  I detect in this tirade a crucial unspoken assumption: that
One Language is necessarily going to be all I ever learn, all I ever use,
for "any programming domain I might ever choose to explore".  This is,
most obviously, a patently silly crucial unspoken assumption, and this
obvious and patent silliness undermines the whole strength of the
argument, no matter how forcefully you choose to present it.

There being no open-source, generally useful operating system kernels in
any language but C, if one "programming domain I choose to explore" is to
modify, enrich and adapt the kernel of the operating system I'm using, with
the ambition of seeing my changes make it into the official release -- I
had better learn and use C pretty well, for example.  Does this mean, by
the (silly, unspoken) "one-language rule", that I am condemned to do _ALL_
of my programming in C...?  By no means!  I can, and do, learn and use more
than one programming language, depending on the context -- who must I be
cooperating with, in what "programming domain", on what sets of platforms,
and so on.  Since I know (and, at need, use) several different programming
languages, I have no need to BLINDLY accept anything whatsoever -- my
(metaphorical) eyes are quite open, and quite able to discern both the
overall picture, and the minutest details -- in point of fact far better
than my "actual" eyes are, since my actual, physical eyesight is far from
good.  With Python, I know by actual experience as well as supporting
reflection and analytical thought, I am quite able to cooperate fruitfully
in mid-sized teams of programmers of varying levels of ability working
rapidly and productively to implement application programs (and frameworks
therefor) of reasonable richness and complexity: the language's clarity,
simplicity, and uniformity (and the cultural biases reinforcing the same
underlying values) help quite powerfully in such collaboration.  If and
when specialized languages are opportune, they can be and are designed
separately (often subject to external constraints, e.g., XML for purposes
of cooperation with other -- separately designed and implemented -- "alien"
applications) and _implemented_ with Python.


> If on the other hand you dream of obtaining a deep understanding of what
> programming is and how languages work, if you dream of some day writing
> your own compilers rather than waiting for someone else to write them for

Beg pardon: I *HAVE* "written my own compilers", several times in the
course of my professional career.  I don't particularly "dream" of doing
it again, I just suspect it's quite possible that it may happen again,
e.g., next time somebody hires me to help write an application that
must suck in some alien-application-produced data (presumably in XML
with some given schema, these days) and produce something else as a
result.  I neither dread nor yearn for such occasions, any more than I
do wrt writing my own network protocols, GUI frameworks, device drivers,
schedulers, and so on -- all tasks I have had to perform, and which
(if I am somewhat unlucky, but within the reasonable span of possibilities)
may well happen to fall on my shoulders again.  If I'm lucky, I will instead
find and be able to re-use *existing* device drivers, compilers, network
protocols, etc, etc -- I would feel luckier, then, because I could devote
more of my effort to building application programs that are going to be
directly useful to other human beings and thus enhance their lives.

I think I have a reasonably deep understanding of "what programming is" --
an activity performed by human beings, more often than not in actual or
virtual teams, and thus first and foremost an issue of collaboration
and cooperation.  "How languages work", from this POV, is first and
foremost by facilitating (or not...;-) the precise, unambiguous
communication that in turn underlies and supports the cooperation and
collaboration among these human beings.  Writing device drivers to
learn what hardware is and how interfaces to it work is one thing; in
most cases, if you find me writing a device driver it will be because,
after searching to the best of my ability, I have not located a device
driver I could simply and productively reuse (and couldn't "wait for
someone else to write" one for me).  And quite similarly, if you find
me writing a network protocol, a compiler, a GUI framework, etc, etc.
I'm not particularly _motivated_ in the abstract to such pursuits, and
to say I "dream" of spending my time building plumbing (i.e., building
infrastructure) would be laughable; I have often had to, and likely
will again in the future, e.g. because a wonderful new piece of HW I
really truly want to use doesn't come with a device driver for the
operating system I need to use it with, etc, etc.


> you, if you want to write code that transcends the ordinary and explores
> ideas and modes of thought that no one has explored before, and you want

Pure research?  Been there, done that (at IBM Research, in the 80's),
eventually moved to an application-development shop because I realized
that such "transcending and exploring" wasn't what I really wanted to
spend my whole life doing -- I wanted to make applications directly
useful to other human beings; I'm an engineer, not a pure scientist.
In any case, even in such pure research I didn't particularly miss
macros.  I was first hired by Texas Instruments in 1980 specifically
because of my knowledge of Lisp -- but was quite glad in 1981 to move
to IBM, and back to APL, which I had used at the start of my "tesi di
laurea" before being forced to recode it all in Fortran, Pascal and
assembly language on a then-newfangled machine called VAX-11 780.  For
the kind of purely numerical processing that constituted my "transcending
and exploring" back then, reusing the existing array-computation
infrastructure in APL was _so_ much better than having to roll my
own in Lisp -- let alone the half dozen different languages all called
"Lisp" (or in a couple of cases "Scheme") that different labs and factions
within labs inside TI had cobbled together with their own homebrew sets
of macros.  Factions and "NIH" are part of the way human beings are
made, of course, but at least, with APL, everybody WAS using the same
language -- the lack of macros constrained the amount of divergence --
so that collaborating with different groups, and sharing and reusing
code, was much more feasible that way.  (Of course, APL had its own
defects -- and how... -- but for the specific task of array computations
it was OK).  Of course, by that time it was already abundantly clear
to me that "horses for courses" was a much better idea in programming
than "one ring to bind them all".  Many people will never agree with
that (and thus almost all languages keep growing to try and be all
things to all people), but since my background (in theory) was mostly HW
(even though I kept having to do SW instead) the idea of having to use
one programming language for everything struck me as silly as having to
use one hardware tool for everything -- I'd rather have a toolbox with
a hammer for when I need to pound nails, AND a screwdriver for when I
need to drive screws, than a "superduper combined hammer+screwdriver+
scissors+pliers+soldering iron+..." which is apparently what most
programming languages aim to be...

> to do all this without having to overcome arbitrary obstacles placed in
> your way by people who think they know more than you do but really don't,

Been there, done that; I can easily say that most of the technologies
I've used in the course of about a quarter century do indeed respond quite
well to this description -- not just programming languages, mind you.  The
fundamental reason I've moved to using Python more than any other language,
these days, is that it doesn't.  It is not perfect -- but then, I have
never used any _perfect_ human-made artefact; it IS simple enough that I
can comfortably grasp it, explain it, understand its defects as well as
its strengths [and where both kinds of characteristics come from] and easily
see how to use it in reasonably good ways for tasks I care a lot about; it
promotes the cultural values that I see as most important for programming
collaboration in typical mid-sized teams -- simplicity, clarity, uniformity.

When it comes to programming language design, my experience learning, using
and teaching Python tells me one thing: Guido thinks he knows more than I
do about programming language design... and _he is right_, he does.  In an
egoless, objective mindset, I'm happier and more productive re-using the
fruits of his design, than I've ever been using languages of my own design
(and those designed by others yet).  I squabble with him as much as anybody,
mind you -- the flamewars you can find in the archives of python-dev and
the main Python list/newsgroup can only testify to a part of that, and
will never capture the expression on his face when he heard somebody else
at PythonUK presenting inter alia the Borg nonpattern I had designed (he
did not know that I, sitting next to him in the audience, was the designer,
so his disgusted diatribe was quite unconstrained -- it probably would have
been anyway, as he's quite an outspoken fellow:-), nor mine after my N-th
fruitless attempt to "sell" him on the "Protocol Adaptation metaprotocol",
the huge benefits of case-insensitivity, or some other of my
hobby-horses;-).  But, you know -- macros wouldn't help me on ANY of
these.  The PAm needs no syntax changes -- it's strictly a semantic issue
and I could easily implement it today (the problem is, due to what in
economics is called a "network effect", the PAm is worth little UNLESS
it's widely adopted, which means making it into the RELEASED language...).
As for changing a language from case-sensitive to case-insensitive, or
viceversa -- well, how WOULD you do it with macros -- without instantly
breaking a zillion lines of good, reusable existing code that depend on
the case-sensitivity rules defined in the official language definition?
And I care more about that wonderful panoply of reusable code, than I do
about making life easier for (e.g.) user of screen-reading software; so
I don't really think Python ever will or should become case-insensitive,
I only DREAM about it, to quote you (but my dream includes a time machine
to go back to 1990 and make it case-insensitive *from the start*, which
is about as likely as equipping it with iterators or PAm from then...:-).
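
(For readers who have not met it, the Borg nonpattern mentioned above
boils down to a handful of lines -- all instances share one state
dictionary, rather than being forced to be a single instance:)

class Borg(object):
    _shared_state = {}
    def __init__(self):
        # every instance uses the same dict as its __dict__
        self.__dict__ = self._shared_state

a = Borg(); b = Borg()
a.answer = 42
print b.answer      # 42: state is shared, identity is not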

> then macros - and Lisp macros in particular - are a huge win.

I think macros (Lisp ones in particular) are a huge win in situations
in which the ability to enrich / improve / change the language has more
advantages than disadvantages.  So, I think they would be a great fit
for languages which target just such situations, such as, definitely,
Perl, and perhaps also Ruby; and a net loss for languages which rely on
simplicity and uniformity, such as, definitely, Python.  If and when I
find myself tackling projects where macros "are a huge win", I may use a
Lisp of some sort, or Dylan, or maybe check out if Glasgow Haskell's
newest addition is usable within the limited confines of my brain -- I
just dearly and earnestly hope that Python remains true to its own
self -- simple, clear, uniform -- and NOT grow any macros itself...!!!


Alex




