John Searle, Professor of Philosophy at Berkeley, is best known for his famous "Chinese Room" analogy. The analogy goes like this: Dr. Searle is in a large room with two holes marked I (Input) and O (Output). Through the 'I' hole he is handed questions written in Chinese characters. Also in his room is a huge book of English instructions telling him how to look up the answers to the Chinese questions and write them on a piece of paper - therefore, practicalities aside, he could look up any question and give the right answer. Searle says this is analogous to computers running NLP programs: just because they output the correct answer for a given input, no matter how complicated the algorithm, it does not constitute understanding. The analogy has been a huge area of debate in the twenty years since Dr. Searle first published his paper on it. Generation5 are very proud to interview him.

1.) Your 'Chinese Room' analogy is probably the single most talked-about subject on the philosophical side of Artificial Intelligence. Did you ever think it would have such an impact?

I knew when I originally formulated the Chinese Room Argument that it was decisive against what I call "Strong Artificial Intelligence", the theory that says the right computer program, in any implementation whatever, would necessarily have mental contents in exactly the same sense that you and I have mental contents. The Chinese Room Argument refutes the view that the implemented computer program, regardless of the physics of the implementing medium, is sufficient by itself to guarantee mental contents. I did not think it would receive the amount of attention it did. What I expected, in so far as I had any expectation at all, is that the people who could appreciate its force would simply accept it, and the people who for one reason or another did not want to face the issue would simply avoid it. What I did not anticipate is that there would be twenty years of continuing debate.

2.) How would you define understanding? Do you differentiate between
the idea of intelligence and that of understanding? (Could a computer be
intelligent, but not *understand* what you were saying?)

"Understanding" is normally contrasted with "misunderstanding", but for
the purposes of my argument this distinction is not important. I don't
"understand" Chinese, but then I don't "misunderstand" it either, because
I do not have any Chinese intentional content at all. The point of the
argument is simply that the syntax of the implemented program is not
sufficient to guarantee the presence of the semantics, or mental content,
of intrinsic intentionality. Where English is concerned I have meaning
attached to the words, that is, I have intentional content attaching to
the words. Where Chinese is concerned in the Chinese room, I am just
manipulating formal symbols.
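[Editor's note: to make "just manipulating formal symbols" concrete, here is a minimal Python sketch. The two-entry rulebook is an invented placeholder for the instruction book in the room; the point is that a lookup like this can produce appropriate-looking replies while nothing in the program represents what any symbol means.]

    # A toy "Chinese Room": nothing here but formal symbol lookup.
    # The rulebook entries are invented placeholders, not real data.
    RULEBOOK = {
        "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I am fine, thanks."
        "天气怎么样？": "天气很好。",    # "How is the weather?" -> "It is nice."
    }

    def answer(question: str) -> str:
        # Purely syntactic: match one symbol string, emit another.
        # At no point is any meaning attached to the symbols.
        return RULEBOOK.get(question, "对不起，我不明白。")  # stock fallback reply

    print(answer("你好吗？"))  # fluent-looking output, no understanding anywhere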
When it comes to the notion of "intelligence" there is a massive
confusion in the AI literature. "Intelligence" has two different senses
(at least). In one sense there is an intelligence that is psychologically
relevant, that is, the intelligence that, for example, humans and animals
have. My dog is literally more intelligent than a mouse, because he has a
bigger brain and has greater psychological capacity. But there is also a
metaphorical, or observer-relative, or derivative sense of intelligence,
where we simply apply the concept of intelligence to things that have no
mental life at all. Thus I can say that my pocket calculators today are
more "intelligent" than my calculators of twenty years ago, and my "smart
modem" of today is much smarter that the modem I had twenty years ago.
This is a harmless use of the term intelligence. The mistake is to suppose
that somehow or other the existence of the behavioral or observer-relative
phenomenon of intelligence is somehow psychologically relevant, that
"intelligent" behavior guarantees actual psychological content, and of
course it doesn't.
By the way, the very expression "artificial intelligence" also trades
on these ambiguities. "Artificial" also has different senses. An
artificial x can be a real x produced artificially, or it can be something
that is not a real x at all. Thus artificial dyes, now commonly used in
oriental rugs, are real dyes, they just happen to be produced
artificially. But artificial leather is not real leather produced
artificially, rather it is not leather at all, but merely a plastic
imitation. The expression "artificial intelligence" trades on the
ambiguity of both "artificial" and "intelligence". The confusion is to
suppose that because we can artificially produce something that behaves as
if it is intelligent, then somehow or other we have artificially produced
real intelligence in the psychologically relevant sense. One thing the
Chinese Room shows is that this is a fallacy.

3.) If a computer manipulating symbols via a specified algorithm is not
understanding, how do you believe humans process information? Do you
believe this to be non-computable?

Basically I think the brain is important. We might be able to do what
the brain does using some other medium, but we would have to duplicate the
specific causal powers of the brain, and not just do a simulation or a
modeling of brain processes; you actually have to duplicate the causal
powers. An analogy will make this point clear: you do not have to use
feathers in order to build a machine that can fly, but you do have to
duplicate the bird's causal power to overcome the force of gravity in the
earth's atmosphere, and a computer simulation of flight is not a flight.
The notion of what is computable and what is not computable is a
mathematical notion having to do with what sorts of problems you can solve
with algorithms, and I do not see any difficulty posed for artificial
intelligence by the limits of computability. The problem is not
computability, the problem is psychological reality. I realize that some
people have made a great deal out of the fact that there are
non-computable problems that humans can solve, but this argument seems to
me irrelevant to AI because humans may be using algorithms to solve those
problems even though the algorithms are not, for example, theorem-proving
algorithms.
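[Editor's note: the classic example of a non-computable problem is the halting problem. The Python sketch below gives the standard diagonal argument; the "halts" function is a hypothetical decider assumed to exist only for the sake of contradiction.]

    # Diagonalization sketch: why no general halting decider can exist.
    def halts(program, argument) -> bool:
        # Hypothetical total decider; Turing's theorem says no correct
        # implementation of this function is possible.
        raise NotImplementedError("no total halting decider exists")

    def diagonal(program) -> None:
        # Loops forever exactly when "halts" claims program halts on itself.
        if halts(program, program):
            while True:
                pass

    # Feeding diagonal to itself contradicts whatever "halts" would answer,
    # so the halting problem cannot be solved by any algorithm.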
It is important to keep emphasizing that of course, in a sense, we are
robots. We are all physical systems capable of behaving in certain ways.
The point, however, is that unlike the standard robots of science fiction,
we are actually conscious. We actually have conscious and unconscious
forms of mental life. The robots we are imagining, I take it, have no
consciousness whatever. They are simply computer simulations of the
behavior patterns of human beings.