Is this a true statement?
tundra at tundraware.com
Mon Jun 25 20:54:02 CEST 2001
Rainer Deyke wrote:
> "Steven D. Majewski" <sdm7g at Virginia.EDU> wrote in message
> news:mailman.993411997.2625.python-list at python.org...
> > I don't retract the statement that the fact that you can compute
> > any computable function in any Turing-complete language does not
> > mean that you can *DO* the same thing in any language.
> I agree here.
> To a program, "doing something" means "interacting with the outer world".
> It is a given that, if a program can receive the proper input from the
> world, it can calculate (but not necessarily perform) the proper output.
> Python includes many ways of interacting with the outer world in its
> standard library, but C++ usually includes more. Python can sometimes get
> around this by outputting machine code, but only if the problem is defined
> in a way that allows this.
> To a programmer, however, "doing something" generally means structuring code
> in a certain way. Programming is about putting all information in only one
> place. Variables are one tool for doing this:
> From this perspective, there are a couple of areas where C++ is more
> powerful, but they are eclipsed by the many areas where Python is more [...]
[Diving back in at great personal risk...]
I still do not understand/agree with this line of reasoning entirely.
As I mentioned in a previous post, one has to define what we mean by
"do" or "program". It is a fundamental truth of Computer Science that
all Turing-Complete languages have *exactly* the same computational
abilities - i.e., they all handle the exact same set of algorithms -
in particular, those algorithms which are Turing Computable (which
excludes things like the Halting Problem). This *includes* all
"interactions with the outside world" because such interaction
is, by definition, algorithmic.
(Fine Point: There is a debatable point here about whether adaptive
algorithms like those found in Genetic Programming are really formally
"algorithms" at all because they do not behave consistently when faced with
the same input conditions, but that's way outside the scope of this discussion.)
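The equivalence claim can be made concrete: any Turing-Complete language can simulate a Turing machine directly. Below is a minimal sketch in Python (the rule encoding and helper names are my own invention, not any standard library):

```python
def run_tm(tape, rules, state="start", pos=0, blank="_", halt="halt", max_steps=10_000):
    """Run a Turing machine: rules maps (state, symbol) -> (write, move, next_state)."""
    cells = dict(enumerate(tape))          # sparse tape: index -> symbol
    for _ in range(max_steps):
        if state == halt:
            break
        sym = cells.get(pos, blank)
        write, move, state = rules[(state, sym)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# A machine that flips every bit, then halts on the first blank:
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_tm("0110", flip))  # 1001
```

Since every Turing-Complete language can express this simulator, each can compute whatever any of the others can.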
Now, if we say 'Language X can "do" things we cannot "do" in Language Y',
we are saying one of several possible things:
1) Language X embraces problems greater than just the Turing-Complete
set (Impossible - If anyone disagrees, please send me your solution
to the Halting Problem and we'll both be famous ;)))
2) Language Y is not Turing-Complete (possible, but not in the class of
languages under discussion in this thread).
3) There is something about how X and Y are *implemented* which causes
this disparity (almost certainly the case). Most likely, Y is
implemented in a way which restricts its use somehow. Note that
the *language* itself is not restricted - it still accepts the entire
set of Turing-Complete problems - it's just crippled *in practice*
because of its implementation. It's like putting a 400hp Porsche
engine on a motorcycle frame - the limitation is not the engine, but
rather the *context* in which the engine is expected to operate.
(Now watch, someone will chime in that they have a 911 Turbo
powerplant mounted on their BMW R100 frame ;) N.B. These
"limitations" are most usually *intentional* on the language
or systems designer's part to promote better coding style, create
a more effective working paradigm, or preserve overall system
security and sanity. "Limitations" in this context are a *good*
thing. For instance, we don't let the arbitrary programmer on a
Unix system read and write kernel memory in their C programs -
*even though the language can do it* - because we want a sane and
stable OS runtime environment.
Now, back to the debate at hand. We've all agreed that some things
are easier to do in C++ (yuck) than Python. Let's take the example
of writing device drivers. Using today's language and syntax to
handle interrupts in Python, you realistically have only a few options:
1) Write a library in another language that implements
interrupt handling and then give it a Python interface.
2) Have Python write out a set of machine code bits which
are understood by the underlying implementation and
hardware to be the correct handling of an interrupt.
3) Add a native interrupt object to the language.
1) clearly takes us out of "pure Python" and is thus of no further
interest (unless you actually want to *solve* this problem in the
real world, in which case this is probably the best solution ;)))
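A hedged sketch of what 1) might look like in practice, using ctypes. The library name `libirq.so` and its `irq_register` function are invented for illustration; only the wrapping pattern is the point:

```python
import ctypes

class IrqLibrary:
    """Thin Python interface over a hypothetical C interrupt library."""

    def __init__(self, path="libirq.so"):  # "libirq.so" is an invented name
        self._lib = ctypes.CDLL(path)
        self._lib.irq_register.argtypes = [ctypes.c_int, ctypes.c_void_p]
        self._lib.irq_register.restype = ctypes.c_int

    def register(self, irq_number, handler):
        # Keep a reference to the callback so it isn't garbage-collected
        # while the C side still holds a pointer to it.
        self._cb = ctypes.CFUNCTYPE(None, ctypes.c_int)(handler)
        return self._lib.irq_register(irq_number,
                                      ctypes.cast(self._cb, ctypes.c_void_p))
```

The heavy lifting happens in C; Python only marshals arguments across the boundary, which is exactly why this option leaves "pure Python".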
2) Works just fine, but is kind of clumsy and unnatural to use.
Note, however, that it is 2) that gives us the ability to do *anything*
in Python which we can "do" in any other Turing-Complete language.
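To make 2) concrete, here is a heavily platform-dependent sketch: on x86-64 Linux, Python can emit raw machine-code bytes and execute them through an executable anonymous mapping. The opcode bytes below encode `mov eax, 42; ret` and are an x86-64 assumption; this will not run on other architectures:

```python
import ctypes
import mmap

# x86-64 machine code for: mov eax, 42 ; ret  (architecture-specific)
code = bytes([0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3])

# Allocate an anonymous mapping that is writable *and* executable.
buf = mmap.mmap(-1, len(code),
                prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
buf.write(code)

# Treat the mapping's address as a C function returning int, then call it.
addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
func = ctypes.CFUNCTYPE(ctypes.c_int)(addr)
print(func())
```

Clumsy and unnatural, as claimed - but it shows Python reaching all the way down to the bits the hardware executes.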
3) is where the rocks start to get thrown. If I do 3), have I "extended"
the language to do something it could not do previously? NO! I have
implemented what I could already do with 2) in a more programmer-friendly
manner, but I have not changed the net computational power of the language.
So, I close with this claim (which I believe is well supported by
the theory): Define "Computational Power" to be the set of
algorithms a given language can compute. Then: Increasing
"Computational Power" for a given language is only possible if
that language is less than Turing-Complete. In every other case,
all we are doing is changing the *programmer's* perception and
paradigm of work, not the net "Computational Power" of the language.
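To illustrate the closing claim with a trivial case: the two functions below compute the exact same function, so by the definition above they have identical "Computational Power" - only the programmer's paradigm (iterative vs. recursive) differs. A minimal sketch:

```python
def fact_iter(n):
    """Factorial, expressed in the iterative paradigm."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

def fact_rec(n):
    """The same algorithm, expressed in the recursive paradigm."""
    return 1 if n < 2 else n * fact_rec(n - 1)

# Same inputs, same outputs - the paradigm changed, the power did not.
assert all(fact_iter(n) == fact_rec(n) for n in range(10))
```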