AI and cognitive psychology rant (getting more and more OT - tell me if I should shut up)

Andrew Dalke adalke at mindspring.com
Sat Nov 8 08:16:52 EST 2003


Stephen Horne
> >>The brain really does change during sleep. In a way, you literally are
> >>not the same person when you wake up as when you went to sleep.

Some ways are more meaningful than others.  I obviously
breathe, providing oxygen to burn parts of my body and thus
provide the energy to survive.  Therefore I 'literally' am not the
same person from moment to moment.  Similarly, the me of time T
experiences events which happened at time T-1, which the me of
time T-2 does not know about, so the state of my neurons and
muscles has changed.

Is that a useful definition of "me"?  In some cases, yes.  In most cases, no.

> And yet consciousness itself changes dramatically, sometimes from
> second to second, and we still feel a sense of continuity.
>
> How can we explain this sense of continuity?
>
> One way is to appeal to the 'god of the gaps' - to claim some
> currently unknown entity which we cannot yet observe must provide the
> real experience of consciousness.
>
> The other way is to look at the pragmatics of social behaviour. A huge
> amount of social behaviour depends on that sense of continuity - that
> feeling that (me today) == (me tomorrow) == (me yesterday).

Only two ways?  Perhaps there just are no gaps?  Consider
evolution.  We do not have every fossil for every ancestor of,
say, a horse, so there are always gaps in the record (what is
the gap between you and your father?)  But the prediction that
there is something in between has fit nicely with both traditional
cladistics and sequence-based phylogenies.

Plus, you are shaking the wrong end of the stick.  The opposite
is more likely true: people have a strong sense of continuity, so
we have developed social structures which reflect that.  If,
for example, personalities were randomly exchanged during
sleep then we would have ended up with a quite different
culture.

(I'm an avid science fiction reader but I can't recall a
story built around this.  There are stories where one or a
small number of people change bodies, but not the whole
culture.  It would be even harder if the full personality of a
body changed every day to a new one -- how would we,
as personality-centered creatures, identify with a being
which every day had a new personality?  The closest I
can think of is Vinge's packs in 'A Fire Upon the Deep',
where the personality can change as pack members join
and leave the pack, or Brin's ... the waxy ring people in
the Uplift stories ... where each ring contributes to the
full personality and, ahem, one ring can rule them all.)

> Having a sense of continuity of self is, in other words, pretty
> directly linked to the practical information processes involved in
> maintaining a reputation and social status.

Again, I'll argue it's the other way around.

> Yes, and so long as you can keep your theory in the gaps and well away
> from scientific probing, no-one can prove it wrong, can they?

You suggested people fill in the gaps of consciousness.
You said they exist because, for example, electrical stimulation
of part of the brain would cause movement, and asking the person
why that movement occurs would get a post hoc reason along
the lines of "well, I wanted to move my arm."

That doesn't necessarily mean that there was no consciousness,
only that some parts of actions are never under direct conscious
control.  Stimulate my eyelids to blink and ask me why I blinked.
I'll say "because my eyes were dry?"  That doesn't mean that
there was no sense of consciousness at that time.

Or ask someone to stop tapping a pencil and that person
might respond with "I'm sorry, I didn't realize I was doing
that."

Or when I drove across the country alone you might ask me
what I did during that time and I'd respond "I don't remember;
I just zoned out."  Was I not conscious during that time or
did I simply decide it wasn't worth remembering?

> Lets consider the 'alters' in multiple personality disorder. This has
> been studied under a brain scanner. When alters switch, the activity
> in the hippocampus changes dramatically. If the 'primary' personality
> is active, the hippocampus is more active.

This work is deeply conjectural.  I have no training at all in the
subject (err, I do know some of the theory behind brain scanning)
but I just recently read a "Straight Dope" article
  http://www.straightdope.com/columns/031003.html
which says

   Multiple personality disorder, now officially known as dissociative
   identity disorder (DID), remains the object of bitter controversy.
   One thing's clear, though--it's not nearly as common as people
   thought just a few years ago.
      ...
   The question remains: Are multiple personalities ever real? The
   debate still rages. Skeptics claim that alters are invariably
   induced by the therapist; the more respectable defenders of
   DID agree that many are, but not all. The controversy has been
   complicated by disagreement over the nature of personality. The
   common understanding of DID is that the alters are independent
   of one another and don't share memories and other cognitive
   processes, but demonstrating this has proven difficult. Speech and
   behavior are under conscious control, so changes can readily be
   faked. Even things like brain-wave patterns may vary not because
   of a genuine personality switch but because alleged alters cultivate
   different emotional states and different ways of acting out.

(Note those last two lines ;)

> Does this really satisfy your need for the mind to be something more
> than information processing?

One interpretation of Bengt's statements is that this higher-level
structure may be modeled in its own right, like phonons, or
Cooper pairs in superconductors, or Boolean logic gates created out
of analog silicon with impurities, or the Great Red Spot on Jupiter.

This doesn't make them 'something more', only something distinct.
And there's no reason these can't be studied with an information
processing model as well.

(However, my reading suggests that his statements are too
vague to know which interpretation is correct, and I'm leaning
towards agreeing that he meant what you thought he meant.)

> The information processing theory can take MPD in its stride, of
> course. If you keep your memories in several self-consistent but
> distinct chunks then, even if the same neural machinery handles the
> processing, you get a different 'self' by simply switching which sets
> of memories are active.

But if MPD really is as rare as Uncle Cecil says, then
information processing theory is not a useful *predictor* of
human behaviour, because it makes no statements about
the likelihood of a given event.  Why don't I have acrophobia
today, arachnophobia tomorrow, and agoraphobia the next?
Why doesn't everyone have strong MPD?

And a requirement for being a useful theory is that it be
able to make predictions.

(Going further off this off-topic thread: in 'Call of Cthulhu', as I
recall, there was a Latinate word for 'the fear that gravity would
reverse'.  I can't find a reference to it.  Any pointers?)

> Basically, as I said before, you seem to be appealing to the 'god of
> the gaps'. You find the idea of the mind as an information processor
> unpalatable, but - as with many people who claim not to be religious
> per se - you don't want to appeal to an actual god or soul. So instead
> you appeal to an undiscovered type of physics.

Suppose some day there is an artificial intelligence which requires
so much computer power that it takes in 1 second of input then
spends 10 minutes processing it.  That makes for very large gaps.
From an information processing model, these gaps do not exist
because time does not come into play in those models (at least not
in the ones I've seen).  But the gap does occur, and it does not
require a 'god of the gaps'.

(Again, I suspect that I am arguing a different viewpoint than
Bengt.)

> Can I explain why people shouldn't be comfortable with being a part of
> the physical world, and of having minds that are explainable in
> information processing terms? Obviously there's the sense of agency
> and illusion of free will, but while they might explain a
> misinterpretation they don't explain a continuing strong conviction,
> so what else?

What's information?  The definitions I know of come from Shannon,
the definition of entropy in thermodynamics, and the surface area
of the event horizon of a black hole, and as I recall it's philosophically
appropriate to wonder whether (or even assert that) they are the same thing.

How then do you turn these definitions into a useful model for
intelligence?  I suspect you do so by first assuming a Boolean
algebra.  It requires a lot of entropy to process one bit (because
of the need for certainty), so you are already using a higher-level
approximation of the underlying physics.
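
For concreteness, here is the Shannon version as a tiny Python
sketch -- the function name and the example coins are mine, purely
for illustration, not anything anyone in this thread proposed:

  import math

  def shannon_entropy(probs):
      # Entropy in bits of a discrete probability distribution.
      return -sum([p * math.log(p, 2) for p in probs if p > 0])

  print(shannon_entropy([0.5, 0.5]))   # a fair coin: 1.0 bit per toss
  print(shannon_entropy([0.9, 0.1]))   # a loaded coin: about 0.47 bits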

And as Michele pointed out, some things can be explained well
with a higher level field equation which does not accurately
model the smaller scale behaviour.

> There is a concept called the 'expanding circle' which relates to who
> is and who is not considered a 'person'. I put that in quotes because
> it is far from literal - humans are often excluded, and (particularly
> in modern times) animals etc are very often included.
>
> Basically, it refers to an intuitive sense of who has rights, who you
> can empathise with, etc. When you can torture and kill 'the enemy'
> without being traumatised yourself (assuming you are not simply a
> sociopath) it is a clear sign that 'the enemy' are outside of your
> circle, for instance.

It is difficult for me to throw a book away, break its spine, or
otherwise destroy it.  I wanted to get rid of a few and ended up
telling a friend of mine to do it for me, because I couldn't do it
myself.  These books were given to me by family, but I was
not going to reread them.  Does that make the books a 'person'?
Does that extend my relatives' personhood into my books?

Suppose someone is told to destroy a work of art which took
that person 10 years of devotion to create.  It's likely that
that would cause trauma.  Does that make the work of art
a person to that artist?

> This intuitive sense has clear practical evolutionary value - being
> willing to kill others in your tribe without batting an eyelid would
> obviously not suit a social species, yet it would be hard to carry out
> intertribal warfare if you empathised with your enemies. And this is
> complicated by the fact that it seems tribes did make both short and
> longer term alliances - you couldn't rely on an individual being a
> stranger, but needed to make more flexible classifications.

You have a naive view of what sociobiology might do,
biased no doubt by being brought up in this culture.

The requirement of 'survival of the fittest' is to increase the chances
that your genes will be propagated.  There's no reason that
cannot happen in a social species.  A newly dominant gorilla,
as I recall, will kill the infants which aren't his.  (No references
though; I should reread my Gould books.)

And in Jared Diamond's "Guns, Germs and Steel" he
mentioned a woman he met in New Guinea, from a tribe
only recently out of the hunter/gatherer stage, whose second
husband was killed by his own brother so that the brother
could become her third husband.

Or consider Shakespeare's Hamlet, where the king's brother
killed the king to become the new king.  We don't
immediately and instinctively reject that as a situation which
cannot occur in human relationships, meaning that it
isn't prohibited by a million years of Darwinian evolution.

Consider the Medici or Borgia families, or for
that matter much of European royalty.  Just how
many of them died at the hands of family rather than of
outsiders?  (Or consider Prince Caspian from the
Narnia book of the same name.  :)

Wild dogs are social creatures.  This page
  http://www.szgdocent.org/aa/a-wildog.htm
says that "wounded members have been known to
be killed by their pack" which is in contradiction to
your direct statement that "being willing to kill others
in your tribe without batting an eyelid would obviously
not suit a social species."

In the Bible, Abraham almost sacrificed his son Isaac,
  http://en.wikipedia.org/wiki/Near_sacrifice_of_Isaac
and of course Cain killed his brother Abel.

There are plenty of cases where one family member
killed another, making it hard to argue that the
prohibition comes from evolutionary reasons.

Plus, as a male it is evolutionarily advantageous to
impregnate non-tribal women instead of killing
them, so you'll need to modify that clause as well.

> This isn't the only intuitive classification the mind makes, of
> course. There appear to be clear intuitive distinctions between
> animals, plants, and inanimate objects for instance. These
> distinctions seem to be innate and instinctive, though there is
> flexibility in them.

That's very culturally biased.  Consider some Native American
languages which have different conjugations for things which
are alive vs. things which are not.  (I think Navajo is one such.)
As I recall, clouds are alive.

In any case, that intuition breaks down because some things
are not animals, not plants, and not inanimate.  Eg, for living
things we must also include bacteria and archaebacteria, and
there are also viruses in the grey area.  For animate vs.
inanimate, when does an animate volcano become an
inanimate mountain?  Is the Moon animate?  What about
a waterfall?  A tornado?

> If these are general intuitive principles, it is no surprise that when
> you introspect you find it hard to accept that your mind is not
> distinct from an information processing machine. Your mind naturally
> classifies your self in the intuitive category of 'person'.

Again, a cultural bias.  It's hard for many people to accept
that they are animals, or that humans and other hominids had
a common ancestor.  Yet so far there is much more evidence
for evolution than there is that an "information processing
theory" gives a similarly useful model for understanding
the brain.

To expand upon that, as a good reductionist and (ex-)physicist,
I think in principle the human body, including the brain, can
be modeled from first principles using quantum mechanics.
However, that cannot be done in realistic time so we must use
approximations, and those approximations may be extremely
good (as in 99.99%+) because the higher levels have an
appropriate 'field theory'.  Eg, weather prediction doesn't
require an atomic theory.

You have postulated such an "information processing theory"
but not stated what that theory means ... and given yourself
an out by saying that it can be expanded to include new
behaviour.  Without an explanation, it's hard to judge if your
idea is correct or not.  Without an explanation, I can just as
well say that "information processing theory" is identical to
the laws of thermodynamics (where entropy == information) and
then point out that you are omitting many parts of physics.
> Basically, it would be surprising if most people didn't resist the
> idea of being, in principle, little different to a computer.

Basically, it would be surprising if most people didn't resist the
idea of being, in principle, little different to another person.  Or
little different to a bonobo, or little different to a dog,
or little different to yeast.  But under some definitions these
are little enough.

> As for me, well perhaps those classification systems don't work so
> well in people with autism spectrum disorders. Maybe that is why we
> are overliteral in applying moral rules as well as other areas - we
> don't really have the same intuitive sense that everyone else has of
> the differences in how people, animals and inanimate objects should be
> treated.

But those classification systems are invalid, and the boundaries
between the classes are culturally determined.  (Hence the
rejection of the idea of a platypus when it was first presented
in Europe, but the full acceptance of it now.)

I suspect it's more a matter of not knowing enough about
animal behaviour and evolutionary biology.  You may want to
read some of the Gould books.  There are several others I
could suggest, but it's been about 10-15 years since I read
them and I can't recall them now.

> Maybe that is a part of the 'theory of mind' issue.

That too, but you are using evolutionary arguments
and need a better background in evolution, both theoretical
and as observed.  (Eg, insects pull off just about every
evolutionary trick in the book, and bacteria can pull off almost
all of the rest.  Humans, though, have learned a few new
ones ;)

> So when I introspect, it doesn't particularly bother me to see myself
> as a (broken) machine designed and built to carry genes from one
> generation to the next, and neither does it particularly bother me to
> see my mind and consciousness as a product of information processing.

"A chicken is an egg's way of making more eggs."

Again, the problem with both those views is that they don't
provide much predictive power.  I have no problems seeing
myself as a self-organized meta-stable collection of atoms
living in a collapsing wave function, but for just about everything
I do that knowledge doesn't help me much.

Oh, and the definition of "broken" in an evolutionary sense means
your genes aren't passed on -- even if you don't have children,
if you help ensure that two or more of your nieces and nephews
do reproduce then you are an evolutionary success.  Plus, you
should drop the "designed" because that implies a designer.
Saying "as a machine to carry genes" would be just fine.

> To put it another way, when looking at a picture you may feel that you
> are aware of the whole picture, but you are not. Big changes can be
> made to the picture without you noticing, as long as they don't affect
> the precise detail that you are focussing on at that precise moment.
> It is called change blindness.

Here's a neat Java-based demo of change blindness.
  http://www.usd.edu/psyc301/ChangeBlindness.htm

One of them (with the sphinx) I got right away.  Another, with
the couple eating dinner, I didn't get even though I was
looking at the spot that changed -- I just thought "something
changed... but what?"  And with the cemetery one I figured
it out because the scene was wrong.

But why deal with change blindness?  Close one eye.
Do you see your blind spot?  (I spent hours in high school
practicing to see it.  I think I annoy optometrists when I
say 'and now the pointer has entered my blind spot'  ;)

> A big part of the sense that something is 'conscious' is actually
> illusory in itself - things that can be immediately brought into
> consciousness tend to be classified by the mind as currently conscious
> because, in the practical sense, they may as well be. So with change
> blindness, you 'experience' the whole picture mainly because you
> intuitively know that you can focus on any part of it in an instant.

What you say doesn't necessarily follow from that.  Consider
this image processing model.  The eye does image processing
which is not under direct conscious control.  This reduces the
scene to a simplified model and accesses the brain's internal
model of how things should be to fill in details, like the
blind spot.  We do have some ability to influence things, but
it isn't under full control.

We are conscious of this model, but don't realize that it's
only an approximation to the actual data coming in.  Isn't
this just as valid as your description, but leading to a different
conclusion?
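
Here's the sort of thing I mean, as a toy Python sketch (entirely
made up, of course -- a list of 'retinal' samples with a gap,
filled in from a stored expectation before the 'conscious' level
ever sees it):

  # 'Raw' samples from the eye; None marks the blind spot.
  raw_scene = [3, 3, 3, None, None, 3, 3, 3]

  # The brain's stored expectation of how this bit of scene should look.
  expected_background = 3

  # Pre-conscious processing fills the gap from the internal model.
  perceived = [expected_background if sample is None else sample
               for sample in raw_scene]

  print(perceived)   # the gap is invisible to whatever reads this model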

(What I did here was pull a Chinese room on the problem,
and redefine things so that consciousness takes place after image
processing.  Personally I'm fine with saying that part of my
consciousness exists at times even outside my head, as with
a dog to wake me up if there are intruders, or even other
people, to provide insight I find hard to reach on my own.)

> The 'whole soup experience' is basically an illusion, or maybe a
> 'collective noun' for the lower level perceptions that you had (or
> didn't have) while eating the soup.

A problem I have with your definition is that you can say "illusion"
but have no way to define what anything besides an illusion would
be.  That makes the word useless in terms of descriptive power; you
could just as easily use the word "interpretation", which doesn't
carry the connotation of falsehood.  Plus, if you push "illusion" too
far you end up in solipsism, which is just plain boring.

> Pick up a pencil and run it over a rough surface. Most people will
> quite quickly start experiencing the bumps as being at the tip of the
> pencil, as if the pencil were an extension of the body.
>
> Phantom limbs also have some significance to this.

The pencil *is* an extension of the body, no?  So is putting gloves
on.  Or driving a car.  I'm having a problem with your use of
the phrase "as if".

In your list of significant examples, consider also prosthetic limbs,
and limbs which have "fallen asleep."

> Basically, there is nothing here which I don't see as information
> processing.

What in the universe *don't* you consider as information processing?
Why not?

Eg, if information is entropy then everything in the Universe
is information processing, meaning your model has no more
predictive power than thermodynamics.

> You might like to read some Rita Carter stuff - this is my easiest
> reference because, as I mentioned before, I'm currently reading her
> book "consciousness".

Based on the statements you've made, I distrust what you've
been reading, or at least your interpretation of that material.  So I
was biased in my search and found a page critical of one of her
books, at
  http://human-brain.org/mapping.html
It makes some of the same complaints about her that I've made here,
like a problem with definitions, and a tendency to give only one
interpretation when many are possible and equally likely.  This
may be okay for a general science book, on the justification
that it provides a possible world view, but it doesn't mean that
that view is correct.

(Ain't 2nd hand put downs fun?  :)

> Body language is quite distinct from the kind of gesture that occurs
> in sign language. As I understand it, most body language is not
> generated by Broca's area.

I personally don't know.  I know evolution much better than I
do neural models.  Google found
  http://cogweb.ucla.edu/ep/GestureLanguage.html
] Brain imaging has shown that a region called Broca's area, which
] is important for speech production, is active not only when we
] speak, but when we wave our hands. Conversely, motor and
] premotor areas are activated by language tasks even when those
] tasks have been robbed of their motor aspect--during silent
] reading, for instance--and particularly by words that have a
] strong gestural component, such as verbs and tool names.
]
] Impairments of language and coordination are closely related,
] too. People with a condition called Broca's aphasia can put
] names to things but have trouble stringing sentences together.
] They show similarly impoverished gestures, indulging less in
] hand movements that relate to the flow or rhythm of the
] accompanying speech than to those that convey its content.

> Also, the use of muscles isn't what this is about. Even when you hear
> a word, or think about the concept, your brain generates the
> associated action potentials - but inhibits the actual muscular
> actions.

As far as I can tell, you haven't said what an 'action potential' is.
I assumed it was something related to synapse signalling, which
the brain must do to work.  Now it appears to be something else,
because some words, like "star" or "Python programming
language" or "tastes of anise", have no corresponding muscular
actions.

> >>This makes good sense. Evolution always works by modifying and
> >>extending what it already has. The mental 'vocabulary' has always been
> >>in terms of the bodily inputs and outputs, so as the capacity for more
>
> >"Always" is a way to prune your thought tree. Careful ;-)
>
> It is also very much the right word in this case. Sophisticated useful
> abilities in complex organisms do not arise through fluke mutations,
> much as broken fragments of china do not suddenly join themselves into
> a perfect vase and leap up onto your wobbly shelf. It's called the law
> of entropy.

There are too many things going on here.  First, assume the
Universe is a closed system.  Evolution works in the Universe, so
of course it always 'modifies and extends' things in the Universe --
it cannot do otherwise.

Second, 'it' can mean a single organism or species, but things
like gene transfer between bacteria show that evolution doesn't
quite fit that nice categorization.

Third, 'evolution' doesn't modify things; things like errors during
{D,R}NA copying, chemical mutagens, and radiation modify things.
Evolution describes how the local ecological landscape can have
different effects on different mutations, which can cause speciation
or extinction.  This is a less personified view than the one you use.

And finally, YES MOST DEFINITELY "sophisticated useful
abilities in complex organisms DO arise through fluke mutations."
THAT IS A FUNDAMENTAL IDEA OF EVOLUTIONARY
THEORY!

Random mutations + ecological pressures => evolution
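
If it helps, here's that formula as a toy Python sketch -- a
made-up one-dimensional 'trait', blind Gaussian mutations, and a
survival filter; nothing here comes from a real population-genetics
model:

  import random

  random.seed(42)
  TARGET = 1.0        # the trait value this 'environment' happens to favour

  def mutate(trait):
      # Blind, undirected change -- a 'fluke mutation'.
      return trait + random.gauss(0, 0.05)

  population = [0.0] * 20
  for generation in range(200):
      offspring = [mutate(t) for t in population for _ in range(2)]
      # Ecological pressure: only the best-matching half survive.
      offspring.sort(key=lambda t: abs(TARGET - t))
      population = offspring[:20]

  print(min(abs(TARGET - t) for t in population))   # ends up very near 0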

The reference to entropy (by which you likely mean the
second law of thermodynamics) applies to a closed system
and is a TOTALLY INAPPROPRIATE METAPHOR.
Locally we have this thing called the Sun, which makes
the Earth an open system.

(My apologies for the yelling, but I wanted to get my point
across that what you said was absolutely counter to
evolutionary theory.)

> Evolution doesn't throw away what it already has and start again from
> scratch. It seeks local maxima - it cannot anticipate long term
> advantages.

Stop anthropomorphising.

Abilities are definitely lost due to evolution, unless you can
indeed breathe underwater?

As to "start again from scratch", I don't know enough give
a good example, but what about dolphins?  They like us were
derived from a fish, but lost many of the sea going abilities
of a fish, which have now been regained.  It cannot be that
these were all done in the same way,  because dolphins
are not fish, and it's almost certain that there was parallel
evolution for some of the traits.

(I also seem to recall some legless lizards first losing then
regaining then losing legs, but again I don't know enough
of the details.)

What to you would qualify as "starting from scratch"?
Here's one theoretical model:

The parent has gene X, which undergoes a gene duplication
event, so there are now two copies of X.

X is important so one of those genes remains unmodified
while the other is free to mutate into Y.

While once useful, Y is no longer needed, and when a
retrovirus comes in it inserts itself into Y with no ill
effects.

Conditions change again so that the X->Y transition
is helpful and preferred.

During that time, X undergoes another duplication event
and the new X is free to follow evolutionary pressures
to be a new version of Y.

I see no reason this couldn't happen, and it's exactly
the case where a gene was thrown away and then recreated.
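
A toy Python version of that model, with invented sequences and a
dict standing in for a genome, just to show the bookkeeping (none of
this is drawn from any real genome):

  import random

  random.seed(0)
  BASES = "ACGT"

  def point_mutate(seq, n):
      # Return seq with n random single-base substitutions.
      seq = list(seq)
      for _ in range(n):
          i = random.randrange(len(seq))
          seq[i] = random.choice(BASES)
      return "".join(seq)

  genome = {"X": "ACGTACGTACGTACGT"}

  # 1. Duplication: a second copy of X appears.
  genome["X_copy"] = genome["X"]

  # 2. X is essential and stays put; the copy is free to drift into Y.
  genome["Y"] = point_mutate(genome.pop("X_copy"), 4)

  # 3. Y is no longer needed; a retroviral insertion disables it.
  genome["Y"] = genome["Y"][:8] + "retrovirus" + genome["Y"][8:]

  # 4. Conditions change; a fresh duplicate of X can drift toward Y
  #    again, recreating the gene that was thrown away.
  genome["Y_again"] = point_mutate(genome["X"], 4)

  for name, seq in sorted(genome.items()):
      print(name, seq)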

                    Andrew
                    dalke at dalkescientific.com





