Artificial Intelligence - Intelligent Enough?
Ankit Singh, B.Tech Student, IIIT-Allahabad
Alan Turing may well be called the father of AI; his contributions to many fields of computer science are tremendous. Best known for his trademark definition of the Turing machine, Turing in 1950 devised a test, now known as the Turing Test, as a method to decide whether an artificial life form is truly intelligent. Those were times when robots, droids, cybots and cyborgs were popular in science fiction, and many people, Turing among them, believed the test would be passed before the end of the 20th century. It didn't happen. In fact, passing the Turing Test is a feat that computer scientists across the globe believe will take many more decades to accomplish (many hold that it will never happen). Strictly, the test measures only one aspect of AI, considered the first step in machine intelligence: Natural Language Processing (NLP). NLP covers the algorithms and tools that let a machine interact in human, or natural, languages the way human beings do. For example, if you ask a machine or a program to describe itself in one sentence, that is NLP. The complexity of NLP lies in translating grammar rules into machine processing and vice versa, a difficult problem given the ambiguity of the grammars of the world's languages. Many algorithms are used for NLP, from neural networks to neuro-fuzzy logic, but the answers received from machines still seem dumb.
To conduct the Turing Test, we need two people and the machine to be evaluated. One person plays the role of the interrogator, who sits in a separate room from the computer and the other person (or is connected over a network). The interrogator can ask questions and receives typed responses, but knows the two participants only as A and B and tries to determine which is the person and which is the computer. The goal of the computer is to fool the interrogator into believing that it is the person. If the machine succeeds, we may conclude that the computer can think and act like a human being (at least linguistically).
So, for example, if the computer is asked what 122486 times 19 is, it can wait for some time and then return a wrong answer; this is perfectly valid, since a human in these circumstances would not be expected to behave like a calculator. The more serious issue, however, is the amount of knowledge the machine would need to pass the Turing Test. Turing illustrated this by having the interrogator ask the computer a tough question about a famous poem. If the computer can answer questions of that kind, it has as much knowledge as a human being.
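A minimal Python sketch of that arithmetic dodge (the delay range and the error offsets are invented for illustration):

    import random
    import time

    def humanlike_multiply(a: int, b: int) -> int:
        """Answer a multiplication question the way a person might:
        pause to work it out, and occasionally slip up."""
        time.sleep(random.uniform(5, 20))    # a human needs time for long multiplication
        answer = a * b
        if random.random() < 0.5:            # and sometimes gets it slightly wrong
            answer += random.choice([-1000, -100, 100, 1000])
        return answer

    print(humanlike_multiply(122486, 19))    # the exact product is 2327234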
In 1966 Joseph Weizenbaum, a scientist at MIT, wrote a program called Eliza (named after Eliza Doolittle, the central character of the famous movie My Fair Lady, who is taught to speak properly) in a Lisp-like language called MAD-Slip, way back in pre-Unix days. Eliza is called the first AI program and, simultaneously, the first NLP program: a chat bot (actually it was more of a mock psychotherapist). It is pretty easy to fool Eliza; in fact, a few sentences of chat are enough to conclude that it is artificial. It used string-matching techniques to generate its responses.
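A minimal Python sketch of that string-matching idea (the patterns and responses are invented here, not Weizenbaum's actual script):

    import random
    import re

    # Each rule pairs a pattern with response templates; %s is filled with
    # whatever the pattern captured, echoing the user the way Eliza did.
    RULES = [
        (re.compile(r"i am (.*)", re.I),   ["Why do you say you are %s?",
                                            "How long have you been %s?"]),
        (re.compile(r"i feel (.*)", re.I), ["Why do you feel %s?"]),
        (re.compile(r"my (.*)", re.I),     ["Tell me more about your %s."]),
    ]
    FALLBACKS = ["Please go on.", "Can you elaborate on that?"]

    def respond(sentence: str) -> str:
        for pattern, templates in RULES:
            match = pattern.search(sentence)
            if match:
                return random.choice(templates) % match.group(1)
        return random.choice(FALLBACKS)

    print(respond("I am worried about my exams"))

A few minutes with rules like these makes the point: the program never understands, it only reflects.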
But suppose we are willing to settle for less than a complete imitation of a person. With this in view, a famous prize, as revered in the AI community as the X Prize is in aerospace technology, has been instituted: the Loebner Prize.
In 1990 Hugh Loebner agreed with the Cambridge Center for Behavioral Studies to underwrite a contest designed to implement the Turing Test. Dr. Loebner pledged a Grand Prize of $100,000 and a gold medal for the first computer whose responses are indistinguishable from a human's. Each year an annual prize of $2,000 and a bronze medal is awarded to the most human-seeming computer. The winner of the annual contest is the best entry relative to the other entries that year, irrespective of how good it is in an absolute sense.
Since then Alice, now a famous bot, has won this revered prize several times. Alice follows the keyword-matching strategy but enhances it a great deal. Suppose we separate the inference engine and the knowledge base of an NLP program, as an expert system (another AI domain) does. Alice does exactly that, keeping the knowledge base and the inference engine (the think tank of the program) separate. The knowledge base is encoded in a markup language called Artificial Intelligence Markup Language (AIML); the inference engine is coded in Java. There is now a plethora of bots, mostly chat bots on the Internet, competing for the Loebner Prize, yet all remain very far from beating the actual Turing Test.
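A rough Python sketch of that separation (the XML below imitates AIML's pattern/template categories in a simplified form; real AIML has many more constructs):

    import xml.etree.ElementTree as ET

    # Knowledge base: pure data, no logic, in the spirit of AIML's
    # <category><pattern>...</pattern><template>...</template></category>.
    KB_XML = """
    <aiml>
      <category><pattern>HELLO</pattern>
                <template>Hi there! How can I help?</template></category>
      <category><pattern>WHAT IS YOUR NAME</pattern>
                <template>My name is Alice.</template></category>
    </aiml>
    """

    # Inference engine: generic matching logic that knows nothing of the content.
    def load_kb(xml_text):
        root = ET.fromstring(xml_text)
        return {c.findtext("pattern"): c.findtext("template")
                for c in root.iter("category")}

    def reply(kb, user_input):
        return kb.get(user_input.strip().upper(), "I do not understand.")

    kb = load_kb(KB_XML)
    print(reply(kb, "what is your name"))   # -> My name is Alice.

Swapping the data file changes the bot's entire personality without touching the engine, which is exactly why the separation pays off.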
An important part of NLP, or of any AI program, is its learning capability. Although Eliza had no learning built into it, the latest bots do. Learning in AI comes in different types, using different algorithms and heuristics. The simplest and easiest to code is rote learning, which is nothing but caching computed results in main memory; interestingly, this increases efficiency as well as effectiveness to nearly 80%. Other learning methods include learning by taking advice, learning by problem solving, learning by analogy, learning by example and, the most interesting one, learning by discovery. Applying any of these methods to NLP programs or bots requires a lot of work; for example, computer scientists have not yet been able to implement learning by discovery in bots.
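A minimal sketch of rote learning as caching, in Python (expensive_parse is a hypothetical stand-in for whatever costly computation the bot performs):

    from functools import lru_cache

    @lru_cache(maxsize=None)    # rote learning: remember every answer ever computed
    def expensive_parse(sentence: str) -> tuple:
        print("computing:", sentence)    # stand-in for parsing, inference, lookup...
        return tuple(sentence.lower().split())

    expensive_parse("How are you")   # computed once
    expensive_parse("How are you")   # second call is served straight from the cache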
Learning and growing the way a human child does seems a good analogy to implement in AI, but it is not that simple. Curiosity is what drives the human mind; curiosity is what makes us question. Heuristics that make a program thirsty for knowledge let its Knowledge Base (KB) expand very easily. Feeding plain text files to NLP programs to add to their KB has already been accomplished, so you could feed an electronic encyclopedia to a program and test its intellect on that. Alternatively, Internet Relay Chat (IRC) can be used to connect the program to messengers and chat rooms so that it interacts with users online (there is great risk involved in this, seriously). So much for knowledge.
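A bare-bones Python sketch of putting a bot on IRC (the server, nickname and channel are invented; a real deployment needs registration handling, rate limiting and, as noted above, caution about what strangers will teach the bot):

    import socket

    SERVER, PORT = "irc.example.org", 6667    # hypothetical server
    NICK, CHANNEL = "nlpbot", "#chatroom"     # hypothetical nick and channel

    sock = socket.create_connection((SERVER, PORT))
    sock.sendall(f"NICK {NICK}\r\nUSER {NICK} 0 * :NLP bot\r\n".encode())
    sock.sendall(f"JOIN {CHANNEL}\r\n".encode())

    buffer = ""
    while True:
        buffer += sock.recv(4096).decode(errors="replace")
        while "\r\n" in buffer:
            line, buffer = buffer.split("\r\n", 1)
            if line.startswith("PING"):       # answer server keepalives
                sock.sendall(("PONG" + line[4:] + "\r\n").encode())
            elif "PRIVMSG" in line:           # a user said something
                text = line.split(":", 2)[-1]
                reply = "Tell me more about that."   # hand `text` to the NLP engine here
                sock.sendall(f"PRIVMSG {CHANNEL} :{reply}\r\n".encode())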
But what about analysis, judgment and reasoning? Unfortunately, this is where bots fail to perform. Reasoning and analysis are better done by expert systems, which are confined to a narrow domain and draw on a vast amount of knowledge in that field; that is how they are able to say that these symptoms suggest this viral infection. Language is vaster than any scientific domain, because it is in common day-to-day use and is enriched with every passing moment. Slang that was prevalent in the 18th century may be unheard of in the 21st. Moreover, language is ambiguous; that is why some scientists suggest using natural languages with stricter grammars, say Sanskrit, for machine interaction. But truly speaking, how many people in the world speak Sanskrit? Parts of the inference engine may be specific to the KB language or to the author's coding style, but basically an NLP program is built much the way a developer builds a compiler for a language. The difference is that in a compiler both the input language and the output language are computer languages (the output mostly assembly code), whereas in a natural language processor the input is a natural language and the output is a pragmatic analysis of what the sentence means. Therein lies the complexity of the problem: natural language grammars are not as hard and fast as computer grammars, so how is a sentence to be analyzed precisely?
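The classic illustration of that ambiguity is a sentence with more than one valid parse. Here is a small sketch using the NLTK library for Python (assuming nltk is installed; the toy grammar is my own):

    import nltk

    # In this grammar, "I saw the man with the telescope" is ambiguous:
    # "with the telescope" can attach to the verb (saw) or the noun (man).
    grammar = nltk.CFG.fromstring("""
    S -> NP VP
    NP -> 'I' | Det N | Det N PP
    VP -> V NP | V NP PP
    PP -> P NP
    Det -> 'the'
    N -> 'man' | 'telescope'
    V -> 'saw'
    P -> 'with'
    """)

    parser = nltk.ChartParser(grammar)
    for tree in parser.parse("I saw the man with the telescope".split()):
        print(tree)    # prints two distinct parse trees

A compiler would reject such a grammar as ambiguous; a natural language processor has to live with it.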
Fooling around with a computer poses another interesting problem. Suppose that a user, instead of chatting seriously with the program, tries to fool it and feeds it misinformation or rumours. How is the program to know the user's mood from his input? Moreover, how should it respond when he claims he once went to the sun? A human being can understand this very easily, but for a computer it is very difficult.
AI is a vast field, and NLP is not its only intriguing domain. AI also includes robotics (the more enhanced form is called cybernetics) and searching. While searching is much simpler than the other two, robotics is a field that has yet to see technology create an artist.
Say we talk of a droid, and by droid I mean robots like the one supposedly concealed by the Pentagon (which can reportedly swim, talk and even shoot; is it myth or reality?), not ones like Asimo, displayed year after year by companies like Honda. What would such an artificial life form need to possess to be called intelligent? First of all, if it can talk, it has NLP. Then come speech processing (which is in its nascent stages of development) and speech recognition. For computer vision it must have static and dynamic visualization and 3D (or at least 2D) recognition of objects, plus a sense of touch (a force or heat sensor can provide this), leaving aside the cybernetic and electro-mechanical components of the droid. Artificial senses of smell and taste have not even begun to be created.
So the making of such a droid involves many difficult and complex tasks that are still only being understood, leaving aside the feats shown in Spielberg's A.I. That film even deals with ontology (the part of AI concerned with computer consciousness), conscience (the ultimate intellect) and artificial tissue.
Taking into account the progress AI has made since its inception, we can ask: has science lived up to science fiction? It is always easier said than done, but when does the reader think the Turing Test will be passed, if ever? And is Artificial Intelligence intelligent enough by 2005, or is it just dumb machines doing specialized jobs?