Python from Wise Guy's Viewpoint
Pascal Bourguignon
spam at thalassa.informatimago.com
Mon Oct 20 23:56:11 EDT 2003
"Andrew Dalke" <adalke at mindspring.com> writes:
> Pascal Bourguignon:
> > We all agree that it would be better to have a perfect world and
> > perfect, bug-free, software. But since that's not the case, I'm
> > saying that instead of having software that behaves like simple unix C
> > tools, where as soon as there is an unexpected situation, it calls
> > perror() and exit(), it would be better to have smarter software that
> > can try and handle UNEXPECTED error situations, including its own
> > bugs. I would feel safer in an AI rocket.
>
> Since it was written in Ada and not C, and since it properly raised
> an exception at that point (as originally designed), which wasn't
> caught at a recoverable point, ending up in the default "better blow
> up than kill people" handler ... what would your AI rocket have
> done with that exception? How does it decide that an UNEXPECTED
> error situation can be recovered?
By looking at the big picture!
The blow-up action would be activated only when the big picture shows
that the AI has lost control of the rocket and that it is going down.
> How would you implement it?
Like any AI.
> How would you test it? (Note that the above software wasn't
> tested under realistic conditions; I assume in part because of cost.)
In a simulator. In any case, the point is to have software that is
able to handle even unexpected failures.
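To make the idea concrete, here is a minimal Python sketch of what such a
"big picture" supervisor might look like. All names (FlightState,
guidance_step, supervise) and the 16-bit range check are hypothetical
illustrations, not the actual Ariane 5 software; the point is only that
an unexpected exception is referred to a top-level decision about overall
system state instead of falling straight through to the default abort.

```python
class FlightState:
    """Hypothetical snapshot of the overall situation."""
    def __init__(self, altitude, on_course):
        self.altitude = altitude
        self.on_course = on_course

    def still_controllable(self):
        # "The big picture": give up only when the rocket is
        # actually out of control and going down.
        return self.on_course and self.altitude > 0


def guidance_step(reading):
    # Stand-in for the real computation; raises on out-of-range
    # input, much like the 16-bit conversion overflow on Ariane 5.
    if not -32768 <= reading <= 32767:
        raise OverflowError("value out of 16-bit range")
    return reading


def supervise(state, reading):
    try:
        guidance_step(reading)
        return "nominal"
    except Exception:
        # Unexpected error: consult the big picture before deciding.
        if state.still_controllable():
            return "degraded"   # keep flying on the last good data
        return "abort"          # the default "blow up" path


print(supervise(FlightState(altitude=1000, on_course=True), 70000))
# -> degraded: the computation failed, but the rocket is still flying
print(supervise(FlightState(altitude=-5, on_course=False), 70000))
# -> abort: the big picture shows control is lost
```

Whether such a supervisor can be specified and tested for a real launcher
is exactly the hard question Andrew raises; the sketch only shows the
control-flow shape of the proposal.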
> I agree it would be better to have software which can do that.
> I have no good idea of how that's done. (And bear in mind that
> my XEmacs session dies about once a year, eg, once when NFS
> was acting flaky underneath it and a couple times because it
> couldn't handle something X threw at it. ;)
XEmacs is not AI.
> The best examples of resilient architectures I've seen come from
> genetic algorithms and other sorts of feedback training; eg,
> subsumptive architectures for robotics and evolvable hardware.
> There was a great article in CACM on programming an FPGA
> via GAs, in 1998/'99 (link, anyone?). It worked quite well (as
> I recall) but pointed out the hard part about this approach is
> that it's hard to understand, and the result used various defects
> on the chip (part of the circuit wasn't used but the chip wouldn't
> work without it) which makes the result harder to mass produce.
>
> Andrew
> dalke at dalkescientific.com
In any case, you're right, the main problem may be that it was
specified to blow up when an unhandled exception was raised...
--
__Pascal_Bourguignon__
http://www.informatimago.com/
Do not adjust your mind, there is a fault in reality.
Lying for having sex or lying for making war? Trust US presidents :-(