AI and cognitive psychology rant (getting more and more OT - tell me if I should shut up)

Stephen Horne $$$$$$$$$$$$$$$$$ at $$$$$$$$$$$$$$$$$$$$.co.uk
Thu Oct 16 06:30:39 EDT 2003


On Wed, 15 Oct 2003 13:42:22 GMT, Alex Martelli <aleax at aleax.it>
wrote:

>Stephen Horne wrote:
>   ...
>>>no understanding, no semantic modeling.
>>>no concepts, no abstractions.
>> 
>> Sounds a bit like intuition to me.
>
>Not to me.  E.g., an algorithm (say one of those for long multiplication)
>fits the list of no's (it may have been _invented_ or _discovered_ by some
>human using any of those -- or suggested to him or her in a drunken stupor
>by an alien visitor -- or developed in other ways yet; but that's not
>relevant as to what characteristics the algorithm itself exhibits) but
>it's nothing like intuition (humans may learn algorithms too, e.g. the
>simple one for long multiplication -- they're learned and applied by
>rote, "intuition" may sometimes eventually develop after long practice
>but the algorithm when correctly followed gives the right numbers anyway).

What is your definition of intuition?

My dictionary has two definitions, and I'll add a third which seems
fairly common in psychology...

1.  instinctive knowledge
2.  insight without conscious reasoning
3.  knowing without knowing how you know

The first definition isn't required by my typical use of the word - it
may or may not be the case. 'Instinctive' is in any case a poorly
defined word these days - it may or may not refer to innate
characteristics - so I prefer to avoid it and state 'innate'
explicitly if that is what I mean.

The third definition tends to follow from the second (if the insight
didn't come from conscious reasoning, you won't consciously know the
reasoning behind it - so you won't know how you know).

Basically, the second definition is the core of what I intend and
nothing you said above contradicts what I claimed. Specifically...

>>>no understanding, no semantic modeling.
>>>no concepts, no abstractions.

...sounds like "knowing without knowing how you know".

Intuitive understanding must be supplied by some algorithm in the
brain even when that algorithm is applied subconsciously. I can well
believe that (as you say, after long practice) a learned algorithm may
be applied entirely unconsciously, in much the same way that (after
long practice) drivers don't have to think about how to drive.

Besides, just because long multiplication tends to be consciously
worked out by humans, that doesn't mean it can't be an innate ability
in either hardware or software.

Take your voice recognition example. If the method is Markov chains,
then I don't understand it, as I don't much remember what Markov
chains are. If I were to approach the task I'd probably use a Morlet
wavelet to get frequency-domain information, plus feature detection
to pick out key features in the frequency domain (and to some degree
in the original unprocessed waveform) - though I really have no idea
how well that would work.
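
Just to make that concrete, here's the sort of rough sketch I have in
mind, in Python with numpy (purely illustrative - the frequencies, the
wavelet width and the 'feature' are arbitrary choices of mine, and I'm
certainly not claiming any real recogniser works like this)...

import numpy as np

def morlet(scale, width=6.0):
    # complex Morlet wavelet sampled at unit rate, centred on zero
    n = int(10 * scale) | 1              # odd length, ~10 'scales' wide
    t = (np.arange(n) - n // 2) / scale
    return np.exp(1j * width * t) * np.exp(-t ** 2 / 2) / np.sqrt(scale)

def dominant_band(signal, scales):
    # for each sample, the index of the wavelet scale with most energy
    power = np.empty((len(scales), len(signal)))
    for i, s in enumerate(scales):
        power[i] = np.abs(np.convolve(signal, morlet(s), mode="same")) ** 2
    return power.argmax(axis=0)          # a crude per-sample 'feature'

fs = 8000.0
t = np.arange(0, 0.5, 1 / fs)
# toy 'speech': a 300 Hz tone followed by an 800 Hz tone
sig = np.concatenate([np.sin(2 * np.pi * 300 * t),
                      np.sin(2 * np.pi * 800 * t)])
freqs = np.array([200.0, 300.0, 500.0, 800.0, 1200.0])
scales = fs * 6.0 / (2 * np.pi * freqs)  # centre each wavelet on a freq
print(dominant_band(sig, scales)[1000::2000])  # mostly 1s, then mostly 3s

A real recogniser obviously needs far more than 'loudest band per
sample', but that's the kind of frequency-domain information I mean.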

However, I don't need any details to make the following
observations...

The software was not 'aware' of the method being used - the Markov
chain stuff was simply programmed in and thus 'innate'. Any 'learning'
related to 'parameters' of the method - not the method itself. And the
software was not 'aware' of the meaning of those parameters - it
simply had the 'innate' ability to collect them in training.

This reminds me a lot of human speech recognition - I can't tell you
how I know what people are saying to me without going and reading up
on it in a cognitive psychology textbook (and even then, the available
knowledge is very limited). I am not 'aware' of the method my brain
uses, and I am not 'aware' of the meaning of the learned 'parameters'
that let me understand a new strong accent after a bit of experience.

The precise algorithms for speech recognition used by IBM's and
Dragon's dictation systems and by the brain are probably different,
but to me fussing about that is pure anthropocentricity. Maybe one day
we'll meet some alien race that uses a very different method of speech
recognition from the one used in our brains.

In principle, to me, the two systems (human brain and dictation
program) have similar claims to being intelligent. Though the human
mind wins out in terms of being more successful (more reliable, more
flexible, better integrated with other abilities).


>> Of course it would be nice if
>> computers could invent a rationalisation, the way that human brains
>> do.
>> 
>> What? I hear you say...
>
>You must be mis-hearing (can't be me saying it) because I'm quite aware
>of the role of rationalization, and even share your hypothesis that it's
>adaptive for reasons connected to modeling human minds.  If I have a
>halfway decent model of how human minds work, my socialization is thereby
>enhanced compared to operating without such a model; models that are
>verbalized are easier to play what-if's with, to combine with each other,
>etc -- they're generally handier in all of these ways than nonverbal,
>"intuitional" models.  (You focus on the models OTHER people have of
>me and our "need for excuses to give to other people", but I think that
>is the weaker half of it; our OWN models of people's minds, in my own
>working hypothesis, are the key motivator for rationalizing -- and our
>modeling people's minds starts with modeling OUR OWN mind).

That might work if people were aware of the rationalisation process -
but remember the cut corpus callosum example. When asked to explain
'why?', the person didn't say 'I don't know' or 'I just felt like it' -
the logical, self-aware answers given that the genuine explanation was
not available to the side of the brain doing the explaining.

The side of the brain doing the explaining was aware that the person
had started moving towards the goal (the coffee) but was not aware of
feeling thirsty (unless the experimenters were extremely sloppy in
setting up the test conditions) so the explanation of "I felt thirsty"
shows a *lack* of self awareness.

But giving non-answers such as "I don't know" or "I just felt like it"
tends not to be socially acceptable. It creates the appearance of
evasiveness. Therefore, giving accurate 'self-aware' answers can be a
bad idea in a social context.

This is actually particularly pertinent to autism and Asperger
syndrome. People with these disorders tend to be too literal, and that
can include giving literally true explanations of behaviour. This ties
in directly with this kind of automatic rationalisation (the autistic
person may well not have an automatic rationalisation of behaviour to
offer), though of course in most situations it is more about a lack of
anticipation of the consequences of what we say and a lack of ability
to find a better way to say it (or a better thing to say).

>> 1.  This suggests that the only human intelligence is human
>>     intelligence. A very anthropocentric viewpoint.
>
>Of course, by definition of "anthropocentric".  And why not?

Because 'the type of intelligence humans have' is not, to me, a valid
limitation on 'what is intelligence?'.

Studying humanity is important. But AI is not (or at least should not
be) a study of people - if it aims to provide practical results then
it is a study of intelligence.

>connection: that "proper study of man" may well be the key reason
>that made "runaway" brain development adaptive in our far forebears --

Absolutely - this is IMO almost certainly both why we have a
specialised social intelligence and why we have an excess (in terms of
our ancestors' apparent requirements) of general intelligence.
Referring back to the Baldwin effect, an ability to learn social stuff
is an essential precursor to it becoming innate.

But note we don't need accurate self-awareness to handle this. It
could even be counterproductive. What we need is the ability to give
convincing excuses.

>(to the point where
>head size became a very serious issue during birth).

This is also possibly relevant to autism. Take a look here...

http://news.bbc.co.uk/1/hi/health/3067149.stm

I don't really agree with the stuff about experience and learning as
an explanation in this. I would simply point out that the neurological
development of the brain continues after birth (mainly because we hit
that size-of-birth-canal issue some hundreds of thousands of years
ago), and the processes that build detailed innate neural structures
may well be disrupted by rapid brain growth (because evolution has yet
to fix the problems that have arisen at the extreme end of the
postnatal-brain-growth curve).

>area -- what kind of mental approach / modeling is used for purposes
>of socialization vs for other purposes.

Yes - very much so.

>But the point remains that we don't have "innate" mental models
>of e.g. the way the mind of a dolphin may work, nor any way to
>build such models by effectively extroflecting a mental model of
>ourselves as we may do for other humans.

Absolutely true. Though it seems to me that people are far too good
at empathising with their pets for a claim that human innate mental
models are completely distinct from those of other animals. I figure
there is a small core of innate empathy which is quite widespread
(certainly among mammals) - and which is 'primitive' enough that even
those of us with Asperger syndrome are quite comfortable with it. And
on top of that, there is a learned (but still largely intuitive) sense
about pets that comes with familiarity.

And as I tried to express elsewhere in different terms, I don't even
think 'innate mental models' is fair - perhaps 'innate mental
metamodels' would be closer.

>> 2.  Read some cognitive neuroscience, some social psychology,
>>     basically whatever you can get your hands on that has cognitive
>>     leanings (decent textbooks - not just pop psychology) - and
>
>Done, a little (I clearly don't have quite as much interest in the
>issue as you do -- I spread my reading interests very widely).

I have the dubious benefit of a long-running 'special interest' (read
'obsession') in this particular field. The practical benefit is
supposed to be finding solutions to problems, but it tends not to work
that way. Though social psychology obviously turned out to be a real
goldmine - actual objective (rather than self-serving) explanations of
the things going on in people's minds in social situations!!!

The only problem is that if you apply social psychology principles to
understand people, you may predict their behaviour quite well but you
absolutely cannot explain your understanding that way - unless, of
course, you like being lynched :-(

>But that's the least of issues in practical play.  I may well be
>in the upper tenth of a centile among bridge players in terms of
>my ability to do such mental computations at the table -- but that
>has very little effect in terms of success in at-the-table results.
>My published theoretical results (showing how to _brute-force_ do
>the probabilistic estimation of "hand strength", a key issue in
>the bidding phase that comes before the actual play) are one thing,
>what I achieve at the table quite another:-).

Yes, but I never said all learned algorithms are intuitive.

This reminds me of something about learning with mnemonics and
methods. I don't remember the source, but the idea was basically that
the mnemonics, step-by-step instructions etc were there to allow a
stage of learning - the stage where you have to think things through.
In time, the explanation goes, you don't need the mnemonics and
step-by-step methods as handling the problem simply becomes natural.

In contexts where this has worked for me, I would say the final
intuition goes beyond what the original rules are capable of - i.e. it
isn't just a matter of the steps being held in procedural memory.
Presumably heuristics which are much better than the ones verbally
stated in the original rules are learned through experience.

>And I sure DO know I don't mentally deal a few tens of thousands
>of possibilities in a montecarlo sample to apply the method I had
>my computer use in the research leading to said published results...;-)

How sure are you of that? After all, the brain is a massively parallel
machine.
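
(As an aside, for anyone who hasn't met the idea: 'dealing a
montecarlo sample' just means something like the following. This is a
deliberately crude Python sketch of my own - nothing to do with Alex's
published method - estimating how often seven missing spades split 4-3
between the two unseen hands by dealing lots of random layouts.)

import random

MISSING_SPADES = 7
UNSEEN_CARDS = 26      # the two hidden hands hold 26 cards between them

def splits_4_3(rng):
    # deal 13 of the 26 unseen cards to one hand and count its spades
    deck = [1] * MISSING_SPADES + [0] * (UNSEEN_CARDS - MISSING_SPADES)
    return sum(rng.sample(deck, 13)) in (3, 4)

def estimate(trials=20000, seed=0):
    rng = random.Random(seed)
    return sum(splits_4_3(rng) for _ in range(trials)) / float(trials)

print(estimate())      # the true figure is about 0.62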

Look at what is known about the prefrontal cortex and you may end up
with the impression that you are looking at a blackboard-based expert
system - a set of specialist 'intelligences' which observe a common
working memory and which make additions to that working memory when
they spot problems they are able to solve. The ideas going into that
working memory seem to be the things that we are consciously aware of.

When information has to make round trips to the working memory, I
would expect it to be pretty slow - just like most conscious thought
processes.

But what if something is being handled entirely within a single
specialist intelligence unit? That unit may well act rather like a
fast 'inner loop' - testing possibilities much quicker than could be
done consciously. That doesn't mean it has to be innate - the brain
does develop (or at least adapt) specialist structures through
learning.

My guess is that even then, there would be more dependence on
sophisticated heuristics than on brute force searching - but I suspect
that there is much more brute force searching going on in people's
minds than they are consciously aware of.
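
To pin down what I mean by 'blackboard-based', here's a toy Python
sketch (purely illustrative - I'm obviously not claiming the
prefrontal cortex literally runs anything like this): a few specialist
'knowledge sources' watch a shared working memory and post new items
whenever they spot something they know how to handle.

class Blackboard:
    # the shared 'working memory' that every specialist can see
    def __init__(self):
        self.items = set()

class Specialist:
    # posts its conclusion once all of its trigger items are on the board
    def __init__(self, name, triggers, conclusion):
        self.name = name
        self.triggers = set(triggers)
        self.conclusion = conclusion

    def run(self, board):
        if self.triggers <= board.items and self.conclusion not in board.items:
            board.items.add(self.conclusion)
            return True
        return False

def solve(board, specialists):
    # keep letting specialists act on the board until nothing changes
    progress = True
    while progress:
        progress = any(s.run(board) for s in specialists)
    return board.items

board = Blackboard()
board.items.add("heard: 'pass the salt'")
print(solve(board, [
    Specialist("parser", ["heard: 'pass the salt'"], "request: salt"),
    Specialist("planner", ["request: salt"], "action: reach for the salt"),
]))

The 'inner loop' point is then just the observation that whatever
happens inside a single Specialist never pays the cost of a round trip
through the shared board.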


-- 
Steve Horne

steve at ninereeds dot fsnet dot co dot uk



