Is this a true statement?

Tim Daneliuk tundra at tundraware.com
Mon Jun 25 16:40:02 EDT 2001


Rainer Deyke wrote:
> 
<SNIP>

> > I still do not understand/agree with this line of reasoning entirely.
> 
> Your reasoning is flawed.

Perhaps, but thus far, I've only seen assertion to that effect, not proof.

<SNIP>

> 
> Interaction with the outside world is, by definition, *not* algorithmic.


Nope.  All computational activity is algorithmic.  The thing
that *causes* the input to be what it is may or may not be
guided by an algorithm, but insofar as a program sees input, it is
algorithmic by nature.  "Input" can be subtle.  It can be
a numeric value, a keystroke, the closing of a switch in a
mechanical device, or simply a measure of time, but all of these
are meaningless (in our context) without algorithmic processing.
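
To make the point concrete, here is a minimal Python sketch (the names and
values are invented for illustration): whatever the *source* of the input -
a human at a keyboard or a clock - the program's handling of it is a fixed,
algorithmic transformation.

    import time

    def process(value):
        # A deterministic transformation of whatever input arrives.
        return sum(ord(c) for c in str(value)) % 256

    keystroke = input("Press a key and hit Enter: ")   # input from a human
    timestamp = int(time.time())                       # input from a clock

    print(process(keystroke))   # same value in -> same value out
    print(process(timestamp))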

 
> Consider this diagram of a typical computer program:
> 
> Input -> Computation -> Output
> 
> Only the "computation" part is algorithmic, and those who use the term
> "Turing complete" generally only concern themselves with the computation

Nope.  Formally, an "algorithm" is something which, when presented with
a given set of inputs, will always produce the same output.  (This, BTW,
is why Genetic Algorithms are a contentious case: given a particular
input, a GA will NOT repeatably give the same results.)  So, by the
very definition of what an algorithm is and does, it embraces the notion
of both input and output.
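
A tiny Python illustration of the distinction (the stochastic routine is a
stand-in for a GA, not a real one):

    import random

    def algorithmic_sort(items):
        # A formal algorithm: the same input always yields the same output.
        return sorted(items)

    def stochastic_guess(items):
        # A GA-like procedure that makes random choices: repeated runs on
        # the same input can return different results.
        shuffled = list(items)
        random.shuffle(shuffled)
        return shuffled

    data = [3, 1, 2]
    print(algorithmic_sort(data), algorithmic_sort(data))   # always identical
    print(stochastic_guess(data), stochastic_guess(data))   # may differ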

> part.  As I understand it, a Turing machine has the ability to input and
> output single characters/numbers through a single input stream and a single

Nope.  This is only one kind of Turing Machine, one which is often used as
a teaching tool.  A Turing Machine may have as many inputs and outputs
as you like, as many symbol types as you like, and move in as many
directions as you like.  It turns out, however, that *all* TMs are reducible
to a one-tape, two-symbol TM (a very deep theorem of Computer Science in its
own right), and that is why this type of TM is typically taught first.
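
For the curious, here is a minimal single-tape Turing Machine simulator in
Python (my own sketch, not anything from this thread).  The sample transition
table increments a binary number written on the tape:

    def run_tm(tape, transitions, state="scan", blank="_", max_steps=10000):
        cells = dict(enumerate(tape))     # sparse tape: position -> symbol
        head = 0
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = cells.get(head, blank)
            write, move, state = transitions[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells)).strip(blank)

    # Binary increment: scan right to the end of the number, then add one,
    # propagating the carry back to the left.
    INCREMENT = {
        ("scan", "0"):  ("0", "R", "scan"),
        ("scan", "1"):  ("1", "R", "scan"),
        ("scan", "_"):  ("_", "L", "carry"),
        ("carry", "1"): ("0", "L", "carry"),
        ("carry", "0"): ("1", "R", "halt"),
        ("carry", "_"): ("1", "R", "halt"),
    }

    print(run_tm("1011", INCREMENT))   # -> 1100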

> output stream.  A Turing machine does *not* have the ability to display
> graphics of any kind.  There is the possibility that another machine turns
> the output of the Turing machine into graphics, but then it is that machine,
> not the Turing machine, which is generating the graphics.

Nope.  *All* computation (including graphics generation) is reducible to a
Turing Machine.  In your example, the machine that turns things into
graphics is itself a TM (if it is a digital computer), or it is the end
analog device, which is irrelevant to this entire discussion.  The computer
that generates graphics on my video card is a TM.  The math co-processor
that does floating point is a TM.  My word processing program is
a TM.  When I follow a set of directions for assembling something in
a consistent and repeatable way, I am acting as a TM.  ALL computation
can be reduced to a TM - this is an essential theorem of Computer Science.

> 
> > (Fine Point:  There is a debatable point here about whether adaptive
> > algorithms like those found in Genetic Programming are really formally
> > "algorithms" at all because they do not behave consistently when faced
> with
> > he same input conditions, but that's way outside the scope of this
> discussion.)
> 
> All computer programs are deterministic.  Any computer program that

Nope.  Not even close to true.  The entire family of NP-Complete problems
(Traveling Salesman, Knapsack Packing, et al.) is defined in terms of
NON-deterministic computation (that's what the 'N' stands for).

> generates output O from input I will always generate output O from input I.
> 

This is true for all things which are formally "algorithms".  But this
is not exactly the same thing as "deterministic".  The Traveling Salesman
problem meets your I/O criterion, but as I said above, it is defined in
terms of Non-Deterministic computation.  See almost any book on algorithm
complexity for an explanation of why this is the case.
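
For readers who have not met it, this is all the Traveling Salesman problem
asks - the city names and distances below are made up, and the exhaustive
search is exponential in the number of cities, which is exactly why the
problem interests complexity theory:

    from itertools import permutations

    DIST = {
        ("A", "B"): 2, ("A", "C"): 9, ("A", "D"): 10,
        ("B", "C"): 6, ("B", "D"): 4,
        ("C", "D"): 8,
    }

    def dist(a, b):
        # Distances are symmetric; look up the pair in either order.
        return DIST[(a, b)] if (a, b) in DIST else DIST[(b, a)]

    def tour_length(tour):
        # Sum of leg lengths, closing the loop back to the starting city.
        return sum(dist(tour[i], tour[(i + 1) % len(tour)])
                   for i in range(len(tour)))

    cities = ("A", "B", "C", "D")
    best = min(permutations(cities), key=tour_length)
    print(best, tour_length(best))    # shortest round trip, by brute force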

> > Now, back to the debate at hand.  We've all agreed that some things
> > are easier to do in C++ (yuck) than Python.  Let's take the example
> > of writing device drivers.  Using today's language and syntax to
> > handle interrupts in Python, you realistically have only a few
> > choices:
> >
> > 1) Write a library in another language that implements
> >    interrupt handling and then give it a Python interface.
> >
> > 2) Have Python write out a set of machine code bits which
> >    are understood by the underlying implementation and
> >    hardware to be the correct handling of an interrupt.
> 
> Only if the system has a writeable disk with sufficient empty space.  This
> is not a given.

Well, no, it's not, but this is irrelevant to the discussion.  Clearly, some
algorithms for a given problem are more time/space efficient than others -
Bubble Sort runs in O(n^2) while Heap Sort runs in O(n log n), yet both are
still just sorting.  The limitations of disk, memory, and CPU merely define
how big a problem we can solve, or how fast we can solve it, not *how many*
different kinds of problems we can solve, which is the debate here.
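
In Python terms (a sketch only), both of the following sort; they differ in
cost, not in what they can compute:

    import heapq

    def bubble_sort(items):                  # O(n^2)
        items = list(items)
        for i in range(len(items)):
            for j in range(len(items) - 1 - i):
                if items[j] > items[j + 1]:
                    items[j], items[j + 1] = items[j + 1], items[j]
        return items

    def heap_sort(items):                    # O(n log n)
        heap = list(items)
        heapq.heapify(heap)
        return [heapq.heappop(heap) for _ in range(len(heap))]

    data = [5, 1, 4, 2, 3]
    print(bubble_sort(data) == heap_sort(data))   # True: same answer either way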

For example, a machine with limited disk space can solve the same *set* of
problems as a machine with terabytes of disk, just much more slowly and less
efficiently.  The minimums necessary to compute everything which is
computable were defined initially by Turing and later by Bohm and
Jacopini, and in no case was disk space, memory space, or CPU speed
a foundational condition.  As I said, these real-world constraints
speak to how fast or how efficiently we can run a given algorithm,
not how many *different* kinds of algorithms we can do.
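
As a small illustration of the Bohm/Jacopini result, here is a function built
from nothing but sequence, selection (if/else), and iteration (while) - the
three constructs they showed are sufficient (the example itself is mine):

    def collatz_steps(n):
        steps = 0                 # sequence
        while n != 1:             # iteration
            if n % 2 == 0:        # selection
                n = n // 2
            else:
                n = 3 * n + 1
            steps = steps + 1
        return steps

    print(collatz_steps(27))      # 111 steps to reach 1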

> 
> > 3) Add a native interrupt object to the language.
> >
> > 1) clearly takes us out of "pure Python" and is thus of no further
> > interest (unless you actually want to *solve* this problem in the
> > real world, in which case this is probably the best solution ;)))
> >
> > 2) Works just fine, but is kind of clumsy and unnatural to use.
> > Note, however, that it is 2) that gives us the ability to do *anything*
> > in Python which we can "do" in any other Turing-Complete language.
> >
> > 3) is where the rocks start to get thrown.  If I do 3), have I "extended"
> > the language to do something it could not previously?  NO!  I have
> > implemented what I could already do with 2) in a more programmer-friendly
> > manner, but I have not changed the net computational power of the
> language.
> 
> "Net computational power" is irrelevant in this context.

If you say so.  I thought this was the exact point of the debate: Can C++
and Python "do" the same things?  That is, do they possess equivalent ability
to compute the same set of problems - Computational Power?


-- 
------------------------------------------------------------------------------
Tim Daneliuk
tundra at tundraware.com


