AI and cognitive psychology rant (getting more and more OT - tell me if I should shut up)

Stephen Horne steve at
Mon Nov 3 02:48:23 CET 2003

On 3 Nov 2003 00:13:58 GMT, bokr at (Bengt Richter) wrote:

>On 1 Nov 2003 22:19:11 -0800, mis6 at (Michele Simionato) wrote:
>>Stephen Horne <steve at> wrote in message news:<uhe2qv0gff8v17trs4cj6mg88t2o7smq9b at>...
>>> The evidence suggests that conscious minds exist
>>> within the universe as an arrangement of matter subject to the same
>>> laws as any other arrangement of matter.

>If there is some "stuff" whose state can eventually be shown to have 1:1 relationship
>with the state of a particular individual's conscious experience

No-one has ever shown that. Actually, it's quite the opposite. Our
consciousness is a very poor substitute for reality. You may see your
conscious awareness as detailed, but that is only because as soon as
you shift your focus to some detail it naturally enters
consciousness (indeed it often gets backdated, so that you think it
was in your consciousness at a time when it simply wasn't there at
all).

What would be a particularly valued aspect of consciousness? How about
'agency' - the sense of owning and controlling our own actions - the
sense of free will?

Well, if electrodes are placed on your brain in the right place, they
can directly move your arm. So what? Well, you will be completely
unaware of the remote control - unless you are told about it, you will
claim that you chose to move your arm of your own free will. You will
even offer an excuse for why you moved your arm - one which you
genuinely believe.

In fact you don't even need electrodes on the brain - the same effect
can be seen in people whose left and right hemispheres have been
separated (corpus callosum cut). Show a card saying 'get a drink' so
that it is visible only in one visual field, and they will go to get a
drink. Show a card saying 'why did you get a drink?' in the other
visual field and they will show no awareness of the first card,
insisting they just felt thirsty or whatever.

Quite simply, consciousness is nothing special. It is a product of
information processing in the brain. Sometimes that information
processing goes wrong for one reason or another, and consciousness
gets distorted as a result.

The 'transducers' are our senses, providing information about reality
(imperfectly) to our brains.

>From the fact that my consciousness goes out like a light almost every night, I speculate that
>that what persists day to day (brains, synaptic connections, proteins, etc) is not the _essential_
>basis of my experience, but rather, those persistent things somehow _shape_ the medium through
>whose state-changes my conscious experience arises.

What if the thing that woke up the next day was a perfect copy of you,
complete with the same memories - rather like Arnie in The 6th Day?

No - not a clone. A clone is at best an identical twin with a
different age as well as different memories, and identical twins do
not have identical brains even at birth - there isn't enough
information in our DNA to give an exact blueprint for the initial
connections between the neurons in our brains.

But assume a perfect copy of a person, complete with memories, could
be made. How would it know that it wasn't the same self that it
remembered from yesterday?

Now consider this...

The brain really does change during sleep. In a way, you literally are
not the same person when you wake up as when you went to sleep.

More worryingly, the continuity of consciousness even when awake is
itself an illusion. Think of the fridge light that turns off when the
fridge is shut - if you didn't know about fridge lights, and could
only see the light when the door is open, you would assume it was
always on. Similarly, whenever you try to observe your state of
consciousness it is inherently on, so it appears to be always on and
continuous, but science strongly suggests that this appearance is
simply wrong.

So do we have any more claim to our conscious sense of self than this
hypothetical copy would have?

The fact is that no-one has shown me anything to make me believe that
we have 'experience' separate from the information processing capacity
of the brain. So far as I can see, the copy would have as much claim
to the conscious sense of self, 'continuing on' from prior memory, as
the original.

>Now consider the experience of being "convinced" that a theory "is true." What does that mean?

Science suggests that all subjective meaning is linked to either
direct senses or action potentials in the brain. If you think of the
concept 'democracy', for instance, you may actually generate the
action potentials for raising your hand to vote (depending on your
particular subjective understanding of that abstract term) - though
those potentials get instantly suppressed.

In fact a key language area (Broca's area, IIRC) is also strongly
linked to gesture - hence sign language, I suppose.

This makes good sense. Evolution always works by modifying and
extending what it already has. The mental 'vocabulary' has always been
in terms of the bodily inputs and outputs, so as the capacity for more
abstract thought evolved it would of course build upon the existing
body-based 'vocabulary' foundations.

I can easily suggest possible associations for the term 'convinced' by
referring to a thesaurus - 'unshakeable', for instance, is a clear
body/motion related metaphor.

Or maybe it relates to the body language action potentials associated
with the appearance of being convinced?

At which point I'm suddenly having an a-ha moment - maybe the verbal
and nonverbal communication deficits in Asperger syndrome and autism
are strongly linked. Maybe a person who is unable to associate an idea
to its body language, for instance, loses a lot of the intuitive sense
of that idea. Thus the idea must be learned verbally and the verbal
definition inherently gets taken too literally.

Basically, if a person has to give the concept a new, abstract,
internal symbol instead of using the normal body-language associated
internal symbol, then any innate intuitions relating to that concept
will be lost. The neurological implementation of the intuitions may
exist but never get invoked - and therefore it will tend to atrophy,
even if its normal development doesn't depend on somatosensory
feedback in the first place.

That could explain some odd 'coincidences'.


I think I might post this idea somewhere where it is actually on
topic.

>Is it pain and pleasure at bottom

Nope - the innate circuitry of our brain brings a lot more than that.
'Pain' and 'pleasure' are actually quite high level concepts, things
which we experience as 'good' or 'bad' only because we have the
information processing machinery that makes those associations.

>I think we will find out a lot yet. Beautiful, subtle stuff ;-) Too bad we are
>wasting so much on ugly, dumb stuff ;-/

'Ugly' and 'dumb' are themselves only subjective perceptions. If you
really want to know the truth, you must accept that it will not always
be what you want to hear.

>>I am also quite skeptical about IA claims.
>Yes, but AI doesn't have to be all that "I" to have a huge economic and social impact.

I thought IA != AI, though I have to admit I'm not sure what IA stands
for. Instrumentalist something-or-other?

As for AI, I'd say I agree. Having a human-style consciousness is not
necessarily a practical asset for an intelligent machine.

Steve Horne

steve at ninereeds dot fsnet dot co dot uk

More information about the Python-list mailing list