AI and cognitive psychology rant (getting more and more OT - tell me if I should shut up)

Stephen Horne steve at ninereeds.fsnet.co.uk
Thu Oct 23 17:55:10 EDT 2003


Sorry for the long delay. Turns out my solution was to upgrade to
Windows XP, which has better compatibility with Windows 98 stuff than
Windows 2000. So I've had some fun reinstalling everything. On the
plus side, no more dual booting.

Anyway...


On Thu, 16 Oct 2003 18:52:04 GMT, Alex Martelli <aleax at aleax.it>
wrote:

>Stephen Horne wrote:
>   ...
>>>>>no understanding, no semantic modeling.
>>>>>no concepts, no abstractions.
>>>> 
>>>> Sounds a bit like intuition to me.
>   ...
>> What is your definition of intuition?
>
>I can accept something like, e.g.:
>
>> 2.  insight without conscious reasoning
>> 3.  knowing without knowing how you know
>
>but they require 'insight' or 'knowing', which are neither claimed
>nor disclaimed in the above.

You can take 'insight' and 'knowing' to mean more than I (or the
dictionary) intended, but in this context they purely mean having
access to information (the results of the intuition).

Logically, those words simply cannot mean anything more in this
context. If you have some higher level understanding and use this to
supply the answer, then that is 'conscious reasoning' and clearly
shows that you know something of how you know (i.e. how that
information was derived). Therefore it is not intuition any more.

Of course the understanding could be a rationalisation - a kind of
reverse-engineered explanation of why the intuition-supplied answer is
correct. That process basically adds the 'understanding' after the
fact, and is IMO an everyday fact of life (as I mentioned in an
earlier post, I believe most people only validate and select
consciously from an unconsciously suggested subset of likely solutions
to many problems). However, this rationalisation (even if backdated in
memory and transparent to the person) *is* after the fact - the
intuition in itself does not imply any 'insight' or 'knowing' at any
higher level than the simple availability of information.

>There are many things I know,
>without knowing HOW I do know -- did I hear it from some teacher,
>did I see it on the web, did I read it in some book?  Yet I would
>find it ridiculous to claim I have such knowledge "by intuition":

Of course. The phrase I used is taken directly from literature, but
the 'not knowing how you know' is obviously intended to refer to a
lack of awareness of how the solution is derived from available
information. Memory is obviously not intuition, even if the context in
which the memory was laid down has been forgotten. I would even go so
far as to suggest that explicit memory is never a part of intuition.
Heuristics (learned or otherwise) are not explicit memories, and
neither is the kind of procedural memory which I suspect plays a
crucial role in intuition.

One thing that has become clear in neuroscience is that almost all
(perhaps literally all) parts and functions of the brain benefit from
learning. Explicit memory is quite distinct from other memory
processes - it serves the conscious mind in a way that other memory
processes do not.

For instance, when a person lives through a traumatic experience, a
very strong memory of that experience may be stored in explicit memory
- but not always. Whether remembered or not, however, that explicit
memory has virtually nothing to do with the way the person reacts to
cues that are linked to that traumatic experience. The kind of memory
that operates to trigger anxiety, anger and so on has very weak links to the
conscious mind (well, actually it has very strong ones, but only so
that it can control the conscious mind - not the other way around). It
is located in the amygdala, it looks for signs of danger in sensory
cues, and when it finds any such cues it triggers the fight-or-flight
stress response.

Freudian repression is a myth. When people experience chronic stress
over a period of years (either due to ongoing traumatic experience or
due to PTSD) the hippocampus (crucial to explicit memory) is damaged.
The amygdala (the location of that stress-response triggering implicit
memory) however is not damaged. The explicit memory can be lost while
the implicit memory remains and continues to drive the PTSD symptoms.

It's no surprise, therefore, that recovered memories so often turn out
to simply be false - but it is still worth considering how this happens.
There are many levels. For instance, explicit memories seem to be
'lossy compressed' by factoring out the kinds of context that can later
be reconstructed from 'general knowledge'. Should your general
knowledge change in the meantime, so does the reconstructed memory.

At a more extreme level, entire memories can be fabricated. The harder
you search for memories, the more they are filled in by made up stuff.
And as mentioned elsewhere, the brain is quite willing to invent
rationalisations for things where it cannot provide a real reason. Add
a psychiatrist prompting and providing hints as to the expected form
of the 'memory' and hey presto!

So basically, the brain has many types of memory, and explicit memory
is different to the others. IMO intuition uses some subset of implicit
memory and has very little to do with explicit memory.

>> The third definition will tend to follow from the second (if the
>> insight didn't come from conscious reasoning, you won't know how you
>> know the reasoning behind it).
>
>This seems to ignore knowledge that comes, not from insight nor
>reasoning, but from outside sources of information (sources which one 
>may remember, or may have forgotten, without the forgetting justifying
>the use of the word "intuition", in my opinion).

Yes, quite right - explicit memory was not the topic I was discussing
as it has nothing to do with intuition.

>> Basically, the second definition is the core of what I intend and
>> nothing you said above contradicts what I claimed. Specifically...
>
>I do not claim the characteristics I listed:
>
>>>>>no understanding, no semantic modeling.
>>>>>no concepts, no abstractions.
>
>_contradict_ the possibility of "intuition".  I claim they're very
>far from _implying_ it.

OK - and in the context of your linking AI to 'how the human brain
works' that makes sense.

But to me, the whole point of 'intuition' (whether in people or, by
extension, in any kind of intelligence) is that the answer is supplied
by some mechanism which is not understood by the individual
experiencing the intuition. It doesn't matter whether that mechanism is
a built-in algorithm or an innate neural circuit, or whether it is the
product of an implicit learning mechanism (electronic/algorithmic or
neural/cognitive).

>> ...sounds like "knowing without knowing how you know".
>
>In particular, there is no implication of "knowing" in the above.

Yes there is. An answer was provided. If the program 'understood' what
it was doing to derive that answer, then that wouldn't have been
intuition (unless the 'understanding' was a rationalisation after the
fact, of course).

>You can read a good introduction to HMM's at:
>http://www.comp.leeds.ac.uk/roger/HiddenMarkovModels/html_dev/main.html

I haven't read this yet, but your description has got my interest.
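
In fact, from your description an HMM recogniser sounds like a neat
concrete example of what I mean by 'knowing without knowing how you
know'. Just as a throwaway sketch (the states, symbols and
probabilities below are all invented - they are not taken from that
tutorial), the Viterbi algorithm picks the most probable hidden-state
sequence for some observations without anything resembling a semantic
model:

# A toy HMM and the Viterbi algorithm - all numbers invented for illustration.
states = ('vowel', 'consonant')
start_p = {'vowel': 0.5, 'consonant': 0.5}
trans_p = {'vowel':     {'vowel': 0.3, 'consonant': 0.7},
           'consonant': {'vowel': 0.6, 'consonant': 0.4}}
emit_p  = {'vowel':     {'loud': 0.7, 'quiet': 0.3},
           'consonant': {'loud': 0.2, 'quiet': 0.8}}

def viterbi(observations):
    # prob[s] = probability of the best path so far that ends in state s
    # path[s] = that best path
    prob = {s: start_p[s] * emit_p[s][observations[0]] for s in states}
    path = {s: [s] for s in states}
    for obs in observations[1:]:
        new_prob, new_path = {}, {}
        for s in states:
            prev = max(states, key=lambda p: prob[p] * trans_p[p][s])
            new_prob[s] = prob[prev] * trans_p[prev][s] * emit_p[s][obs]
            new_path[s] = path[prev] + [s]
        prob, path = new_prob, new_path
    best = max(states, key=lambda s: prob[s])
    return path[best], prob[best]

print(viterbi(['loud', 'quiet', 'quiet', 'loud']))

The program 'knows' the most likely interpretation of the observations,
but nowhere in there is any understanding of why - just arithmetic over
probability tables.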

>The software was not built to be "aware" of anything, right.  We did
>not care about software to build sophisticated models of what was
>going on, but rather about working software giving good recognition
>rates.

Evolution is just as much the pragmatist.

Many people seem to have an obsession with a kind of mystic view of
consciousness. Go through the list of things that people raise as
being part of consciousness, and judge it entirely by that list, and
it becomes just another set of cognitive functions - working memory,
primarily - combined with the rather obvious fact that you can't have
a useful understanding of the world unless you have a useful
understanding of your impact on it.

But there is this whole religious thing around consciousness that
really I don't understand, to the point that I sometimes wonder if
maybe Asperger syndrome has damaged that too.

Take, for instance, the whole fuss about mirror tests and the claim
that animals cannot be self-aware as they don't (with one or two
primate exceptions) pass the mirror test - they don't recognise
themselves in a mirror.

There is a particular species that has repeatedly failed the mirror
test that hardly anyone mentions. Homo sapiens sapiens. Humans. When
first presented with mirrors (or photographs of themselves), members
of tribes who have had no contact with modern cultures have
consistently reacted much the same way - they simply don't recognise
themselves in the images. Mirrors are pretty shiny things.
Photographs are colourful patterns, but nothing more.

The reason is simple - these people are not expecting to see images of
themselves and may never have seen clear reflected images of
themselves. It takes a while to pick up on the idea. It has nothing to
do with self-awareness.

To me, consciousness and self-awareness are nothing special. Our
perception of the world is a cognitive model constructed using
evidence from our senses using both innate and learned 'knowledge' of
how the world works. There is no such thing as 'yellow' in the real
world, for instance - 'colour' is just the brain's way of labelling
certain combinations of intensities of the three wavebands of light
that our vision is sensitive to.

While that model isn't the real world, however, it is necessarily
linked to the real world. It exists for a purpose - to allow us to
understand and react to the environment around us. And that model
would be virtually useless if it did not include ourselves, because
obviously the goal of much of what we do is to affect the environment
around us.

In my view, a simple chess program has a primitive kind of
self-awareness. It cannot decide its next move without considering how
its opponent will react to its move. It has a (very simple) world
model, and it is aware of its own presence and influence in that world
model.
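
To make that concrete - a throwaway sketch, using a trivial take-away
game rather than chess to keep it short - the essential point is that
the program cannot score its own candidate move without modelling how
the opponent will respond to it:

# Game: players alternately take 1-3 counters; whoever takes the last
# counter wins. +1 means a win for "me", -1 a win for the opponent.
def best_move(counters, my_turn=True):
    if counters == 0:
        # whoever just moved took the last counter and won
        return (-1 if my_turn else +1), None
    results = []
    for take in (1, 2, 3):
        if take <= counters:
            value, _ = best_move(counters - take, not my_turn)
            results.append((value, take))
    # on my turn I pick what is best for me; on the opponent's turn I
    # assume they pick what is worst for me - a model of me in the world
    return max(results) if my_turn else min(results)

value, move = best_move(10)
print("take", move, "counters; predicted result for me:", value)

The world model here is laughably simple, but the program's choices
only make sense because its own future moves, and the opponent's
reactions to them, are part of that model.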

Of course human self-awareness is a massively more sophisticated
thing. But there is no magic.

Very likely your software was not 'aware' of anything, even in this
non-magical sense of awareness and consciousness. As you say - "We did
not care about software to build sophisticated models of what was
going on".

But that fits exactly my favourite definition of intuition - of knowing
without knowing how you know. If there were sophisticated models, and
particularly if the software had any 'understanding' of what it was
doing, it wouldn't be intuition - it would be conscious reasoning.

>People do have internal models of how people understand speech -- not
>necessarily accurate ones, but they're there.  When somebody has trouble
>understanding you, you may repeat your sentences louder and more slowly,
>perhaps articulating each word rather than slurring them as usual: this
>clearly reflects a model of auditory performance which may have certain
>specific problems with noise and speed.

I disagree. To me, this could be one of two things...

1.  A habitual, automatic response to not being heard with no
    conscious thought at all - for most people, the most common
    reasons for not being understood can be countered by speaking more
    loudly and slowly.

2.  It is possible that a mental model is used for this and the
    decision made consciously, though I suspect the mental model comes
    in more as the person takes on board the fact that there is a
    novel communication barrier and tries to find solutions.

Neither case is relevant to what I meant, though. People don't
consciously work on recognising sounds, nor on translating series of
such sounds into words and sentences - that information is provided
unconsciously. Only when understanding becomes difficult enough that
the unconscious solutions are likely to be erroneous is there any
conscious analysis.

And that conscious analysis is not an analysis of the process by which
the 'likely solutions subset' is determined. There is no doubt
'introspection' in the sense that intermediate results in some form
(which phonemes were recognised, for instance) are passed on to the
conscious mind to aid that analysis, and at that stage a conscious
model obviously comes into play, but I don't see that as particularly
important to my original argument.

Of course people can use rational thought to solve communication
problems, at which point a mental model comes into play, but most of
the time our speech recognition is automatic and unconscious.

Even when we have communication difficulties, we are not free to
introspect the whole speech recognition process. Rather, some plausible
solutions and key intermediate results (and a sense of where the
problem lies) are passed to the conscious mind for separate analysis.

The normal speech recognition process is basically a black box. It is
able to provide intermediate results and 'debugging information' in
difficult cases - but there is no conscious understanding of the
processes used to derive any of that. I couldn't tell you anything much
of use about the patterns of sound that create each phoneme, for
instance. The awareness that one phoneme sounds rather similar to
another doesn't count, in itself.

(By 'phoneme' I mean the basic sound components that build up words.)

>(as opposed to, e.g., the proverbial "ugly American" whose caricatural
>reaction to foreigners having trouble understanding English would be
>to repeat exactly the same sentences, but much louder:-).

I believe the English can outdo any American in the loud-and-slow
shouting at foreigners thing ;-)

>> The precise algorithms for speach recognition used by IBM/Dragons
>> dictation systems and by the brain are probably different, but to me
>
>Probably.
>
>> fussing about that is pure anthropocentricity. Maybe one day we'll
>
>Actually it isn't -- if you're aware of certain drastic differences
>in the process of speech understanding in the two cases, this may be
>directly useful to your attempts of enhancing communication that is
>not working as you desire.

Yes, but I was talking about what can or cannot be considered
intelligent. I was simply stating that in my view, a thing that
provides intelligent results may be considered intelligent even if it
doesn't use the same methods that humans would use to provide those
results.

I talk to my mother in a slightly different way to the way I talk to
my father. This is a practical issue necessitated by their different
conversational styles (and the kind of thing that seriously bugs
cognitive theorists who insist despite the facts that people with
Asperger's can never understand or react to such differences). That
doesn't mean that my mother and father can't both be considered
intelligent.

>  E.g., if a human being with which you're
>very interested in discussing Kant keeps misunderstanding each time
>you mention Weltanschauung, it may be worth the trouble to EXPLAIN
>to your interlocutor exactly what you mean by it and why the term is
>important; but if you have trouble dictating that word to a speech
>recognizer you had better realize that there is no "meaning" at all
>connected to words in the recognizer -- you may or may not be able
>to "teach" spelling and pronunciation of specific new words to the
>machine, but "usage in context" (for machines of the kind we've been
>discussing) is a lost cause and you might as well save your time.

Of course, that level of intelligence in computer speech recognition
is a very long way off.

>But, you keep using "anthropocentric" and its derivatives as if they
>were acknowledged "defects" of thought or behavior.  They aren't.

Not at all. I am simply refusing to apply an arbitrary restriction on
what can or cannot be considered intelligent. You have repeatedly
stated, in effect, that if it isn't the way that people work then it
isn't intelligent (or at least AI). To me that is an arbitrary
restriction. Especially as evolution is a pragmatist - the way the
human mind actually works is not necessarily the best way for it to
work and almost certainly is not the only way it could have worked. It
seems distinctly odd to me to observe the result of a particular roll
of the dice and say "this is the only result that we can consider
valid".

>see Timur Kuran's "Private Truths, Public Lies", IMHO a masterpiece
>(but then, I _do_ read economics for fun:-).

I've not read that, though I suspect I'll be looking for it soon.

>But of course you'd want _others_ to suppy you with information about
>_their_ motivations (to refine your model of them) -- and reciprocity
>is important -- so you must SEEM to be cooperating in the matter.
>(Ridley's "Origins of Virtue" is what I would suggest as background
>reading for such issues).

I've read 'Origins of Virtue'. IMO it spends too much time on the
prisoner's dilemma. I have the impression that either Ridley has little
respect for his readers' intelligence or he had little to say and had
to do some padding. What Ridley takes a whole book to say, Pinker
covers in a couple of pages.

>But if there are many types, the one humans have is surely the most
>important to us

From a pragmatic standpoint of getting things done, that is clearly
not true in most cases. For instance, when faced with the problem of
writing a speech recognition program, you and your peers decided to
follow the pragmatic approach and do something different to what the
brain does.

>Turing's Test also operationally defines it that
>way, in the end, and I'm not alone in considering Turing's paper
>THE start and foundation of AI.

Often, the founders of a field have certain ideas in mind which don't
pan out in the long term. When Kanner discovered autism, for instance,
he blamed 'refrigerator' mothers - but that belief is simply false.

Turing was no more omniscient than Kanner. Of course his contribution
to many fields in computing was beyond measure, but that doesn't mean
that AI shouldn't evolve beyond his conception of it.

Evolution is a pragmatist. I see no reason why AI designers shouldn't
also be pragmatists.

If we need a battle of the 'gods', however, then may I refer you to
George Boole who created what he called 'the Laws of Thought'. They
are a lot simpler than passing the Turing Test ;-)

>> Studying humanity is important. But AI is not (or at least should not
>> be) a study of people - if it aims to provide practical results then
>> it is a study of intelligence.
>
>But when we can't agree whether e.g. a termine colony is collectively
>"intelligent" or not, how would it be "AI" to accurately model such a
>colony's behavior?

When did I claim it would be?

>  The only occurrences of "intelligence" which a
>vast majority of people will accept to be worthy of the term are those
>displayed by humans

Of course - we have yet to find another intelligence at this point
that even registers on the same scale as human intelligence. But that
does not mean that such an intelligence cannot exist.

> -- because then "model extroflecting", such an
>appreciated mechanism, works fairly well; we can model the other
>person's behavior by "putting ourselves in his/her place" and feel
>its "intelligence" or otherwise indirectly that way.

Speaking as a frequent victim of a breakdown in that process (my broken
non-verbal communication and other social difficulties frequently lead
people to jump to the wrong conclusion - and to persist in that bad
conclusion, often for years, despite clear evidence to the contrary), I
can tell you that there is very little real intelligence involved. Of
course, even many quite profoundly autistic people can "put themselves
in his/her place", and people who supposedly have no empathy can
frequently be seen crying about suffering that neurotypicals have
become desensitised to. But my experience of trying to explain Asperger
syndrome to people (which is quite typical of what many people with AS
have experienced) is pretty much proof positive that most people are
too lazy to think about such things - they'd rather keep jumping to
intuitive-but-wrong conclusions, and as a consequence they'd rather
carry on victimising people in supposed retaliation for non-existent
transgressions.

'Intelligent' does not necessarily imply 'human' (though in practice
it does at this point in history), but certainly 'human' does not
imply 'intelligent'.

>  For non-humans
>it only "works" (so to speak) by antroporphisation, and as the well
>known saying goes, "you shouldn't antropomorphise computers: they
>don't like it one bit when you do".

Of course - but I'm not the one saying that computer intelligence and
human intelligence must be the same thing.

>
>A human -- or anything that can reliably pass as a human -- can surely 
>be said to exhibit intelligence in certain conditions; for anything
>else, you'll get unbounded amount of controversy.  "Artificial life",
>where non-necessarily-intelligent behavior of various lifeforms is
>modeled and simulated, is a separate subject from AI.  I'm not dissing
>the ability to abstract characteristics _from human "intelligent"
>behavior_ to reach a useful operating definition of intelligence that
>is not limited by humanity: I and the AAAI appear to agree that the
>ability to build, adapt, evolve and generally modify _semantic models_
>is a reasonable discriminant to use.

Why should the meaning of the term 'intelligent' be derived from the
meaning of the term 'human' in the first place?

Things never used to be this way. Boole could equate thought with
algebra and no-one batted an eyelid. Only once the human throne of
specialness came under threat (on the one hand from Darwin's assertion
that we are basically bald apes, and on the other from machines doing
tasks that were once considered impossible for anything but human
minds) did terms like 'intelligence', 'thought' and 'consciousness'
start taking on mystic overtones.

Once upon a time, "computer" was a job title. You would have to be
pretty intelligent to work as a computer. But such people were
replaced by pocket calculators.

People have been told for thousands of years that humanity is special,
created in God's image and similar garbage. Elephants would no doubt be
equally convinced of their superiority, if they thought of such
things. After all, no other animal has such a long and flexible nose,
so useful for spraying water around for instance.

Perhaps such arrogant elephants would find the concept of a hose pipe
quite worrying?

I think what is happening with people is similar. People now insist
that consciousness must be beyond understanding, for example, not
because there is any reason why that should be true, but simply because
they need some way to differentiate themselves from machines and apes.

>If what you want is to understand intelligence, that's one thing.  But
>if what you want is a program that takes dictation, or ones that plays
>good bridge, then an AI approach -- a semantic model etc -- is not
>necessarily going to be the most productive in the short run (and
>"in the long run we're all dead" anyway:-).

I fully agree. And so does evolution. Which is why 99% or more of what
your brain does involves no semantic model whatsoever.

>  Calling program that use
>completely different approaches "AI" is as sterile as similarly naming,
>e.g., Microsoft Word because it can do spell-checking for you: you can
>then say that ANY program is "AI" and draw the curtains, because the
>term has then become totally useless.  That's clearly not what the AAAI
>may want, and I tend to agree with them on this point.

Then you and they will be very unhappy when they discover just how
'sterile' 99% of the brain is.

>What we most need is a model of _others_ that gives better results
>in social interactions than a lack of such a model would.  If natural
>selection has not wiped out Asperger's syndrome (assuming it has some
>genetic component, which seems to be an accepted theory these days),
>there must be some compensating adaptive advantage to the disadvantages
>it may bring (again, I'm sure you're aware of the theories about that).
>Much as for, e.g., sickle-cell anemia (better malaria resistance), say.

There are theories of compensating advantages, but I tend to doubt
them. This is basically a misunderstanding of what 'genetic' means.

First off, to the extent that autism involves genetics (current
assessments claim autism is around 80% genetic IIRC) those genetics
are certainly not simple. There is no single autism gene. Several
'risk factor' genes have been identified, but all can occur in
non-autistic people, and none is common to more than a 'significant
minority' of autistic people.

Most likely, in my view, there are two key ideas to think of in the
context of autism genetics. The first is recessive genes. The second
is what I call a 'bad mix' of genes. I am more convinced by the latter
(partly because I thought it up independently of others - yes, I know
that's not much of an argument) so I'll describe that in more detail.

In general, you can't just mutate one gene and get a single change in
the resulting organism. Genes interact in complex ways to determine
developmental processes, which in turn determine the end result.

People have recently, in evolutionary terms, evolved for much greater
mental ability. But while a new feature can evolve quite quickly, each
genetic change that contributes to that feature also has a certain
amount of 'fallout'. There are secondary consequences, unwanted
changes, that need to be compensated for - and the cleanup takes much
longer.

Genes are also continuously swapped around, generation by
generation, by recombination. And particular combinations can have
'unintended' side-effects. There can be incompatibilities between
genes. For evolution to progress to the point where there are no
incompatibilities (or immunities to the consequences of those
incompatibilities) can take a very long time, especially as each
problem combination may only occur rarely.

Based on this, I would expect autistic symptoms to suddenly appear in
a family line (when the bad mix genes are brought together by a fluke
of recombination). This could often be made worse by the general
principle that birds of a feather flock together, bringing more
incompatible bad mix genes together. But as reproductive success drops
(many autistics never find partners) some of the lines simply die out,
while other lines simply separate out those bad mix genes, so that
while the genes still exist most children no longer have an
incompatible mix.

Basically, the bad mix comes together by fluke, but after a few
generations that bad mix will be gone again.

Alternatively, people with autism and Asperger syndrome seem to
consistently have slightly overlarge heads, and there is considerable
evidence of an excessive growth in brain size at a very young age.
This growth spurt may well disrupt developmental processes in key
parts of the brain. The point being that this suggests to me that
autistic and AS people are basically pushing the limit in brain size.
We are the consequence of pushing too fast for too much more mental
ability. We have the combination of genes for slightly more brain
growth, and the genes to adapt developmental processes to cope with
that growth - but we don't have the genes to fix the unwanted
consequences of these new mixes of genes.

So basically, autism and AS are either the leading or trailing edge of
brain growth evolution - either we are the ones who suffer the
failings of 'prototype' brain designs so that future generations may
evolve larger non-autistic brains, or else we are the ones who suffer
the failings of bad mix 'fallout' while immunity to the bad gene
combinations gradually evolves.

In neither case do we have a particular compensating advantage, though
a few things have worked out relatively well for at least some people
with AS over the last few centuries. Basically, you get the prize
while I suffer for it. Of course I'm not bitter ;-)

>>>But the point remains that we don't have "innate" mental models
>>>of e.g. the way the mind of a dolphin may work, nor any way to
>>>build such models by effectively extroflecting a mental model of
>>>ourselves as we may do for other humans.
>> 
>> Absolutely true. Though it seems to me that people are far to good at
>> empathising with their pets for a claim that human innate mental
>> models are completely distinct from other animals. I figure there is a
>
>Lots of antropomorphisation and not-necessarily-accurate projection
>is obviously going on.

Not necessarily. Most of the empathising I was talking about is pretty
basic. The stress response has a lot in common from one species to
another, for instance. This is about the level at which body language
works in AS - we can spot a few extreme and/or stereotyped emotions
such as anger, fear, etc.

Beyond that level, I wouldn't be able to recognise empathising with
pets even if it were happening right in front of me ;-)

>> My guess is that even then, there would be more dependence on
>> sophisticated heuristics than on brute force searching - but I suspect
>> that there is much more brute force searching going on in peoples
>> minds than they are consciously aware of.
>
>I tend to disagree, because it's easy to show that the biases and
>widespread errors with which you can easily catch people are ones
>that would not occur with brute force searching but would with
>heuristics.  As you're familiar with the literature in the field
>more than I am, I may just suggest the names of a few researchers
>who have accumulated plenty of empirical evidence in this field:
>Tversky, Gigerenzer, Krueger, Kahneman... I'm only peripherally
>familiar with their work, but in the whole it seems quite indicative.

I'm not immediately familiar with those names, but before I go look
them up I'll say one thing...

Heuristics are fallible by definition. They can prevent a search
algorithm from searching a certain line (or more likely, prioritise
other lines) when in fact that line is the real best solution.

With human players having learned their heuristics over long
experience, they should have a very different pattern of 'tunnel
vision' in the search to that which a computer has (where the
heuristics are inherently those that could be expressed 'verbally' in
terms of program code or whatever).

In particular, human players should have had more real experience of
having their tunnel vision exploited by other players, and should have
learned more sophisticated heuristics as a result.

I don't believe in pure brute force searching - for any real problem,
that would be an infinite search (and probably not even such a small
infinity as aleph-0). When I say 'brute force' I tend to mean that as
a relative thing - faster searching, less sophisticated heuristics. I
suspect that may not have been clear above.

But anyway, the point is that heuristics are rarely much good at
solving real problems unless there is some kind of search or closure
algorithm or whatever added.
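
As a throwaway illustration of that trade-off (the 'game tree' and the
heuristic below are both made up), a search that only expands the lines
a heuristic likes is much cheaper than brute force, but the tunnel
vision can cost it the genuinely best line:

# Brute force looks at every line; the heuristic search only expands the
# single child the (deliberately imperfect) heuristic rates highest.
tree = {'root': ['a', 'b'], 'a': ['a1', 'a2'], 'b': ['b1', 'b2']}
leaf_value = {'a1': 3, 'a2': 5, 'b1': 9, 'b2': 1}
heuristic = {'a': 0.9, 'b': 0.2, 'a1': 0.5, 'a2': 0.6, 'b1': 0.4, 'b2': 0.1}

def brute_force(node):
    if node in leaf_value:
        return leaf_value[node]
    return max(brute_force(child) for child in tree[node])

def heuristic_search(node, beam=1):
    if node in leaf_value:
        return leaf_value[node]
    # only expand the `beam` children the heuristic likes best - tunnel vision
    keep = sorted(tree[node], key=heuristic.get, reverse=True)[:beam]
    return max(heuristic_search(child, beam) for child in keep)

print(brute_force('root'))       # 9 - the best leaf, under branch 'b'
print(heuristic_search('root'))  # 5 - the heuristic never looked at branch 'b'

Widening the beam or improving the heuristic shrinks the blind spot, at
the cost of doing more searching.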

I do remember reading that recognition of rotated shapes shows clear
signs that a search process is going on unconsciously in the mind.
This isn't conscious rotation (the times were IIRC in milliseconds)
but the greater the number of degrees of rotation of the shape, the
longer it takes to recognise - suggesting that subconsciously, the
shape is rotated until it matches the required template.
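
A toy version of that 'rotate until it matches' search might look like
the following (purely an illustration of a serial search whose cost
grows with the angle - I'm not claiming the brain literally does this):

import math

# an "L" shape, as a list of points, in its stored orientation
template = [(0, 0), (1, 0), (2, 0), (2, 1)]

def rotate(points, degrees):
    rad = math.radians(degrees)
    c, s = math.cos(rad), math.sin(rad)
    return [(x * c - y * s, x * s + y * c) for x, y in points]

def matches(shape, target, tol=0.01):
    # crude comparison: every point of `shape` lies near some point of `target`
    return all(any(abs(ax - bx) < tol and abs(ay - by) < tol
                   for bx, by in target)
               for ax, ay in shape)

def steps_to_recognise(seen, step=10, max_steps=36):
    # rotate the seen shape in small steps until it lines up with the template
    for n in range(max_steps):
        if matches(rotate(seen, n * step), template):
            return n
    return None

for angle in (0, 30, 90, 170):
    shown = rotate(template, -angle)     # present the shape rotated by `angle`
    print(angle, "degrees ->", steps_to_recognise(shown), "steps")

The number of steps (and so the time taken) grows with how far the
shown shape is from the stored orientation - which is the shape of the
experimental result.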

So searches do seem to happen in the mind. Though you are quite right
to blame heuristics for a lot of the dodgy results. And while I doubt
that 'search loops' in the brain run through thousands of iterations
per second, with good heuristics maybe even one iteration per second
(or even less) could be sufficient.

The real problem for someone with AS is that so much has to be handled
by the single-tasking conscious mind. The unconscious mind is, of
course, able to handle a number of tasks at once. If only I could
listen to someone's words and figure out their tone of voice and pay
attention to their facial expression at the same time, I'd be a very
happy man. After all, I can walk and talk at the same time, so why not
all this other stuff too :-(

>It IS interesting how often an effective way to understand how
>something works is to examine cases where it stops working or
>misfires -- "how it BREAKS" can teach us more about "how it WORKS"
>than studying it under normal operating conditions would.  Much
>like our unit tests should particularly ensure they test all the
>boundary conditions of operation...;-).

That is, I believe, one reason why some people are so keen to study
autism and AS. Not so much to help the victims as to find out more
about how social ability works in people who don't have these
problems.


-- 
Steve Horne

steve at ninereeds dot fsnet dot co dot uk



