Is this a true statement?
Brianmce at bigfoot.com
Tue Jun 26 12:40:02 CEST 2001
On 25 Jun 2001 20:40:02 GMT, Tim Daneliuk <tundra at tundraware.com> wrote:
>Rainer Deyke wrote:
>> > I still do not understand/agree with this line of reasoning entirely.
>> Your reasoning is flawed.
>Perhaps, but thus far, I've only seen assertion to that effect, not proof.
>> Interaction with the outside world is, by definition, *not* algorithmic.
>Nope. All computational activity is algorithmic. The thing
>that *causes* the input to be what it is may- or may not be
>guided by algorithm but insofar as a program sees input, it is
>algorithmic by nature. "Input" can be subtle. It can be
>a numeric value, a keystroke, the closing of a switch in a
>mechanical device, or simply a measure of time, but all these
>are meaningless (in our context) without algorithmic processing.
Output is not the same thing as computational ability. The interfaces
through which a program can produce output and interact with its
environment have no effect on its computational power. PostScript is
Turing complete, but can't even access files. Java is Turing complete,
and remains so when you limit what it can access through its security
model.
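To make the point concrete, here is a sketch (in Python, with instruction
names I made up for this example) of a tiny counter-machine interpreter:
it performs no I/O whatsoever, yet machines of this style are, in
principle, computationally universal.

```python
def run(program, registers):
    """Interpret a toy counter machine.  Instructions (invented names):
    ('inc', r)        -- increment register r
    ('dec', r)        -- decrement register r
    ('jz', r, target) -- jump to target if register r is zero
    Pure computation: no files, no keyboard, no output channel."""
    pc = 0
    while pc < len(program):
        op = program[pc]
        if op[0] == 'inc':
            registers[op[1]] += 1
            pc += 1
        elif op[0] == 'dec':
            registers[op[1]] -= 1
            pc += 1
        elif op[0] == 'jz':
            pc = op[2] if registers[op[1]] == 0 else pc + 1
    return registers

# A program that adds register 1 into register 0.
add = [
    ('jz', 1, 4),   # while r1 != 0:
    ('dec', 1),     #     r1 -= 1
    ('inc', 0),     #     r0 += 1
    ('jz', 2, 0),   # unconditional jump back (r2 is always 0)
]
```

Running `run(add, [3, 4, 0])` leaves the registers as `[7, 0, 0]` -- a
computation carried out with no interface to the outside world at all.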
>> Consider this diagram of a typical computer program:
>> Input -> Computation -> Output
>> Only the "computation" part is algorithmic, and those who use the term
>> "Turing complete" generally only concern themselves with the computation
>Nope. Formally, an "algorithm" is something which when presented with
>a given set of inputs will always produce the same output. (This, BTW,
>is why Genetic Algorithms are a contentious case because given a particular
>input, a GA will NOT give the same results repeatably.) So, by the
>very definition of what an algorithm is and does, it embraces the notion
>of both input and output.
No - the input to the genetic algorithm is the entire input since the
start of the program. That its behaviour has changed after some
input is irrelevant - if you go back to the same initial state and
give it the same input, it will act exactly the same. (It can't do
anything else - after all, it is running on a deterministic finite
state machine.)
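For example, even a program driven by a pseudo-random number generator
behaves identically when restarted from the same initial state (here,
the seed) with the same input. A toy sketch (my own example, not a real
GA):

```python
import random

def mutate_population(seed, population):
    """A toy 'genetic' mutation step: perturb each gene pseudo-randomly.
    The seed is part of the program's initial state."""
    rng = random.Random(seed)
    return [gene + rng.randint(-1, 1) for gene in population]

# Same initial state + same input => identical output, every time.
a = mutate_population(42, [10, 20, 30])
b = mutate_population(42, [10, 20, 30])
assert a == b
```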
>> All computer programs are deterministic. Any computer program that
>Nope. Not even close to true. The entire family of NP-Complete algorithms
>(Traveling Salesman, Knapsack Packing, et al) are NON-deterministic (that's
>what the 'N' stands for.)
This is incorrect. There is no such thing as an NP-complete
_algorithm_; you mean NP-complete _problems_. That classification
simply means that no deterministic algorithm is currently known that
solves the problem in polynomial time.
The non-deterministic part refers to the fact that it _is_ possible to
check whether a proposed solution is in fact correct in polynomial
time, and so if you have a non-deterministic step that 'guesses' a
solution, or you check all possible solutions in parallel, then you
can potentially solve the problem in polynomial time.
The checking algorithm is still deterministic. The NP part only
refers to the time complexity of the problem, not the algorithm. The
problems are completely solvable with normal algorithms - they just
take a long time.
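A toy subset-sum sketch (my own example) makes the distinction
concrete: verifying a proposed solution is cheap and deterministic,
and a deterministic brute-force search also works - it just takes
exponential time:

```python
from itertools import combinations

def verify(subset, target):
    """Verifying a proposed subset-sum solution is fast and fully
    deterministic: just add the numbers up."""
    return sum(subset) == target

def solve(numbers, target):
    """Deterministic brute force: try every subset in a fixed order.
    Always terminates, always gives the same answer -- it merely
    examines up to 2**len(numbers) candidates."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if verify(subset, target):
                return subset
    return None
```

Here `solve([3, 9, 8, 4], 12)` returns `(3, 9)`: a perfectly ordinary
algorithm, just one whose running time blows up with the input size.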
>> > 2) Have Python write out a set of machine code bits which
>> > are understood by the underlying implementation and
>> > hardware to be the correct handling of an interrupt.
Actually, I don't think this is a valid solution to the problem. If
you define "writing a device driver in Python" as equivalent to
"writing a program in Python that writes a device driver", then you
violate the "purely in Python" bit, because you're also writing it in
C (the Python interpreter is written in C, and it interprets the
program - your Python program - that wrote the program that generated
the device driver ;-)
>> Only if the system has a writeable disk with sufficient empty space. This
>> is not a given.
>Well no it's not, but this is irrelevant to the discussion. Clearly, some
>implementations of any algorithm are more time/space efficient - Bubble
>Sorts run in O(n^2) but Heap Sorts run in O(n log n) yet both are still
>just sorting. The limitations of disk, memory, and CPU merely define
>how big a problem we can solve, or how fast we can solve it, not *how many*
>different kinds of problems we can solve, which is the debate here.
This is really a nitpick, but limitations of memory etc. do define
what type of problems we can solve. You can't do anything on a machine
with no memory. The reason is that real computers aren't really Turing
machines; they are finite state machines. With a reasonable amount of
memory they approximate Turing completeness for most practical
problems.
>For example, a machine with limited disk space can solve the same *set* of
>problems as a machine with Terabytes of disk, just much slower and less
>efficiently. The minimums necessary to compute everything which is
>computable were defined initially by Turing and later by Bohm and
>Jacopini and in no case was disk space, memory space, or CPU speed
>a foundational condition. As I said, these real-world constraints
>speak to how fast or how efficiently we can run a given algorithm,
>not how many *different* kinds of algorithms we can do.
Turing does state that a certain amount of memory is needed to compute
anything. A Turing machine has to have infinite memory in some form
(e.g. the standard infinite tape). There are some problems that can't
be solved with a mere finite number of states.
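A sketch of the idea (the rule format here is invented for this
example): a Turing-machine simulator whose tape is a dictionary that
grows on demand, standing in for the unbounded tape the formal model
requires.

```python
from collections import defaultdict

def run_tm(rules, tape_input, start='A', halt='H', max_steps=10_000):
    """Simulate a Turing machine.  The defaultdict plays the role of the
    infinite tape: any cell not yet written reads as 0, and the tape can
    extend arbitrarily in either direction.
    rules: (state, symbol) -> (new_symbol, move, new_state), move in {-1, +1}."""
    tape = defaultdict(int, enumerate(tape_input))
    pos, state = 0, start
    for _ in range(max_steps):      # step bound so the sketch always returns
        if state == halt:
            break
        symbol, move, state = rules[(state, tape[pos])]
        tape[pos] = symbol
        pos += move
    return tape, state

# A one-state machine that writes 1s rightward until it reads a 1.
rules = {('A', 0): (1, +1, 'A'),
         ('A', 1): (1, +1, 'H')}
```

On the input `[0, 0, 0, 1]` this halts in state `'H'` with the first
four cells all set to 1 - and nothing in the transition rules ever has
to care how long the tape is.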
>> > 3) Add a native interrupt object to the language.
>> > 1) clearly takes us out of "pure Python" and is thus of no further
>> > interest (unless you actually want to *solve* this problem in the
>> > real world, in which case this is probably the best solution ;)))
>> > 2) Works just fine, but is kind of clumsy and unnatural to use.
>> > Note, however, that it is 2) that gives us the ability to do *anything*
>> > in Python which we can "do" in any other Turing-Complete language.
>> > 3) is where the rocks start to get thrown. If I do 3), have I "extended"
>> > the language to do something it could not previously? NO! I have
>> > implemented what I could already do with 2) in a more programmer-friendly
>> > manner, but I have not changed the net computational power of the
But I don't think you can define the language as "anything it might do
at some future point in time, after I've added something."
If I add the complete set of C interfaces to PostScript, I don't
think I can call it PostScript anymore (unless I want a lawsuit
whenever someone views a document that formats his hard drive).
If by "Python" you are referring to all possible future
implementations of the language, then this is true, but you can still
say "there are things possible in <foo> implementation of C that are
not possible with the standard distribution of Python 2.1".
>> "Net computational power" is irrelevant in this context.
>If you say so. I thought this was the exact point of the debate: Can C++
>and Python "do" the same things. i.e., Do they possess equivalent abilities
>to compute the same set of problems - Computational Power.
They do possess equivalent computational power, but they can't do the
same things, because C defines certain interfaces that Python doesn't.