Machines Who Think: 25th anniversary edition
Pamela McCorduck
Natick, MA: A K Peters, Ltd., 2004
Recent commentary about the 25th anniversary edition of Machines Who Think
"Over
the course of the last half-century, a number of books have
sought to explain AI to a larger audience and many more
devoted to writing the formal history of AI. It is a tribute
to her powers of observation and her conversational style that
none has really proven more successful than Pamela
McCorduck's Machines Who Think, now approaching the quarter
century mark. Currently, it is the first source cited on the
AI Topics web site on the history of AI. Based on extensive
interviews with many of the early key players, it managed to
forge the template for most subsequent histories, in the sense
of providing them both the time line and the larger frame
tale."
–AI Magazine,
Winter 2004
"In
summary, if you are interested in the story of how the
pioneers of AI approached the problem of getting a machine to
think like a human, a story told with verve, wit, intelligence
and perception, there is no better place to go than this
book."
–Nature, February 19,
2004
"The enormous, if stealthy, influence
of AI bears out many of the wonders foretold 25 years ago in
Machines Who Think, Pamela McCorduck's groundbreaking
survey of the history and prospects of the field…. [T]aken
together, the original and the afterword form a rich and
fascinating history."
–Scientific American, May
2004
A
25-year-old book about science has some explaining to do.
Machines Who Think was conceived as a history of artificial
intelligence, beginning with the first dreams of the classical Greek
poets (and the nightmares of the Hebrew prophets), up through its
realization as twentieth-century science.
The interviews
with AI's pioneer scientists took place when the field was young and
generally unknown. The scientists were nearly all in robust middle age, with a
few decades of fertile research behind them, and luckily, more to
come. Thus their explanations of what they thought they were doing
were spontaneous, provisional, and often full of glorious fun. Tapes
and transcriptions of these interviews, along with supporting
material and working drafts of the manuscript, can be found in the
Pamela McCorduck Collection at the Carnegie Mellon University Library Archives. If
you believe (and I do) that artificial intelligence is not only one
of the most audacious of scientific undertakings, but also one of
the most important, these interviews are a significant record of the
original visionaries, whose intellectual verve set the tone for the
field for years to come. That verve–that arrogance, some people
thought–also set teeth on edge, as I've pointed out.
Practicing scientists, more interested in what will happen
than what once did, are apt to forget their field's history. The new
is infinitely seductive. But an interesting characteristic of AI
research is how often good ideas were proposed, tried, then dropped,
as the technology of the moment failed to allow a good idea to
flourish. Then technology would catch up, and a whole new set of
possibilities would emerge; another generation would rediscover a
good idea, and the dance would begin once more. Meanwhile, new ideas
have come up alongside the old: far from "the failure" its critics
love to claim, the field thrives and already permeates everyday
life.
But above all, the history of AI is a splendid tale in
its own right–the search for intelligence outside the human cranium.
It entailed defining just what "intelligence" might be (disputed
territory even yet) and which Other, among several candidates, might
exhibit it. The field called up serious ethical and moral questions,
and still does. It all happens to be one of the best tales of our
times.
From the new foreword:
"Machines
Who Think has its own modest history that may be worth telling.
In the early summer of 1974, John McCarthy made an emergency landing
in his small plane in Alaska, at a place called (roughly translated)
the Pass of Much Caribou Dung, so remote a spot he could not radio
for help."
From
the 30,000-word afterword, which summarizes the field since the
original was published:
"In the late 1970s and early
1980s, artificial intelligence moved from the fringes to become a
celebrity science. Seen in the downtown clubs, boldface in the
gossip columns, stalked by paparazzi, it was swept up in a notorious
publicity and commercial frenzy."
The new edition also has two separate time-lines: one traces the evolution of AI in its narrowest sense, and a second takes a much broader view of intellectual history, placing AI in the context of all human information gathering, organizing, propagation, and discovery. That central place for AI has only become apparent with the development of the second-generation World Wide Web, which will depend deeply on AI techniques for finding, shaping, and inventing knowledge.
Herb Simon himself urged me to
re-publish. "Pamela," he wrote in email a few months before he died.
"Do consider what might be done about bringing Machines Who
Think back into print. More machines are thinking every day, and
I would expect that every one of them would want to buy a copy.
Soccer robots alone should account for a first
printing."
FAQ
answered by Pamela McCorduck:
Q: How long has the
human race dreamed about thinking machines?
A:
Since at least the time of classical Greece, when Homer's
Iliad tells us about robots that are made by the Greek god
Hephaestos. Some of them are human-like, and some of them are
just machines–for example, golden tripods that serve food and wine
at banquets. At about the same time, the Chinese were also
telling tales of human-like machines that could think.
It's
also important to remember that this is the time in human history
when the Second Commandment was codified, prohibiting the making of
graven images. In my book, I describe two opposing attitudes: I call one
the Hellenic point of view, meaning out of Greece, and generally
welcoming the idea of thinking machines. The other I call the
Hebraic, which finds the whole idea of thinking machines wicked,
even blasphemous. These two attitudes are very much alive
today.
The history of thinking machines is extremely rich:
every century has its version. The 19th century was particularly
fertile: Frankenstein and the Tales of E. T.
A. Hoffmann were published, and the fake chess machine called "The
Turk" was on exhibit.
Q: What's the difference between all
those tall tales and what you're writing about?
A: They
were exactly that–tall tales. However, by the middle of the 20th century, a small group of farsighted scientists understood that the computer would allow them to actually realize this longstanding dream of a thinking machine.
Q: What does it mean that a machine beat Garry Kasparov, the world's chess champion?
A: It's a tremendous achievement for human
scientists to design a machine smart enough to beat not only the
reigning chess champion, but also a man said to be the best chess player ever. Kasparov, for his part, claims that these
programs are making him a better chess player.
Q: Does
this mean that machines are smarter than we are?
A:
Machines have been "smarter" than us in many ways for a
while. Chess is the best-known achievement, but many
artificially intelligent programs have been at work for at least two
decades in finance, in many sciences, such as molecular biology and
high-energy physics, and in manufacturing and business processes all
over the world. If you include arithmetic, machines have been
"smarter" than us for more than a century. People no longer
feel threatened by machines that can add, subtract, and remember
faster and better than we can, but machines that can manipulate and
even interpret symbols better than we can give us pause.
Q:
Those are very narrow domains. Do general-purpose
intelligent machines as smart as humans exist?
A: Not
yet. But scientists are trying to figure out how to design a
machine that exhibits general intelligence, even if that means
sacrificing a bit of specialized intelligence.
Q: If the
human chess champion has finally been defeated, what's the next big
goal?
A: It took fifty years from the time scientists first proposed the goal of a machine that could be the world's chess champion until that goal was reached. In the late 1990s, a major new goal was set. In fifty years, AI should field a robot team of soccer players to compete with and defeat the human team of champions at the World Cup. In the interim, more modestly
accomplished soccer robots are teaching scientists a great deal
about physical coordination in the real world, pattern recognition,
teamwork, and real-time tactics and strategy under
stress. Scientists from all over the world are fielding teams
right now–one of the most obvious signs of how international
artificial intelligence research has become.
Q: Artificial
intelligence–is it real?
A: It's real. For more than
two decades, your credit card company has employed various kinds of
artificial intelligence programs to tell whether or not the
transaction coming in from your card is typical for you, or whether
it's outside your usual pattern. If a transaction falls outside the pattern, a warning flag goes up, and it might even be rejected. This isn't usually an easy, automatic judgment–many factors are weighed as the program is deciding (a toy sketch of that kind of weighing follows this answer). In fact, finance might be one of
the biggest present-day users of AI. Utility companies employ
AI programs to figure out whether small problems have the potential
to be big ones, and if so, how to fix the small problem. Many
medical devices now employ AI to diagnose and manage the course of
therapy. Construction companies use AI to figure out schedules
and manage risks. The U.S. armed forces use all sorts of AI
programs–to manage battles, to detect real threats out of possible
noise, and so on. Though these programs are usually smarter
than humans could be, they aren't perfect. Sometimes, like
humans, they fail.
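To make that concrete, here is a minimal, purely illustrative Python sketch of how several factors might be weighed before a charge is approved, flagged, or rejected. It is not how any real card issuer works; every field name, weight, and threshold below is invented for this example.

# Illustrative only: a toy multi-factor check that flags transactions
# falling outside a customer's usual pattern. Names, weights, and
# thresholds are invented for this sketch.
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float        # purchase amount in dollars
    hour: int            # local hour of day, 0-23
    country: str         # where the charge originated
    merchant_type: str   # e.g. "grocery", "electronics"

@dataclass
class CustomerProfile:
    typical_amount: float      # rough average purchase size
    usual_countries: set       # countries the customer normally buys in
    usual_merchants: set       # merchant categories seen before
    usual_hours: range         # hours of day the customer usually shops

def anomaly_score(tx: Transaction, profile: CustomerProfile) -> float:
    """Weigh several factors; a higher score means a more unusual charge."""
    score = 0.0
    if tx.amount > 3 * profile.typical_amount:
        score += 2.0   # much larger than the customer's usual purchases
    if tx.country not in profile.usual_countries:
        score += 3.0   # charge from an unfamiliar country
    if tx.merchant_type not in profile.usual_merchants:
        score += 1.0   # new kind of merchant
    if tx.hour not in profile.usual_hours:
        score += 1.0   # odd time of day
    return score

def review(tx: Transaction, profile: CustomerProfile) -> str:
    """Turn the score into one of three outcomes, as the answer above describes."""
    score = anomaly_score(tx, profile)
    if score >= 5.0:
        return "reject"    # far outside the pattern: decline the charge
    if score >= 2.0:
        return "flag"      # unusual: raise a warning flag for review
    return "approve"       # looks like the customer's normal behavior

if __name__ == "__main__":
    profile = CustomerProfile(
        typical_amount=60.0,
        usual_countries={"US"},
        usual_merchants={"grocery", "gas", "restaurant"},
        usual_hours=range(8, 23),
    )
    tx = Transaction(amount=950.0, hour=3, country="FR", merchant_type="electronics")
    print(review(tx, profile))   # prints "reject" for this out-of-pattern charge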
Q: What so-called smart computers do–is
that really thinking?
A: No, if you insist that thinking
can only take place inside the human cranium. But yes, if you
believe that making difficult judgments, the kind usually left to
experts, choosing among plausible alternatives, and acting on those
choices, is thinking. That's what artificial intelligences do
right now.
Along with most people in AI, I consider what
artificial intelligences do a form of thinking, though I agree that
these programs don't think just like human beings do, for the most
part. I'm not sure that's even desirable. Why would we
want artificial intelligences if all we wanted were human-level
intelligence? There are plenty of humans on the
planet. AI's big project is to make intelligences that exceed
our own.
Q: But doesn't that mean our own machines will
replace us?
A: This continues to be debated both inside
and outside the field. Some people fear this–that a smart
machine will eventually get smart enough to come in and occupy our
ecological niche, and that will be that. So long, human
race.
Some people think that the likeliest scenario is that
smart machines will help humans become smarter, the way Garry
Kasparov feels that smart chess-playing machines have made him a
better player.
Some people think that smart machines won't
have any desire to occupy our particular niche: instead, being
smarter than we are, they'll lift the burden of managing the planet
off our shoulders, and leave us to do the things we do best–a rather
pleasant prospect.
But recently, Bill Joy, an eminent
computer scientist who helped found Sun Microsystems, was worried
enough to write an article that calls for a halt in AI and some
other kinds of research. He's far from the first, by the
way. Most of the arguments against halting suggest that the
benefits will outweigh the dangers. But nobody believes that
there's no chance of danger.
Q: Aren't you yourself
worried?
A: I agree that the dangerous scenarios are
entirely plausible. I explore that further in my book. But
I also believe that the chance is worth taking. The benefits
could be tremendous.
Let's take some
examples. Scientists are at work right now on robots that will
help the elderly stay independently in their homes a bit longer than
otherwise. I think that's terrific. At the 2003 Super Bowl (and presumably at the 2004 Super Bowl too) a kind of artificial
intelligence called "smart dust"–smart
sensors a millimeter by a millimeter–was
deployed to sense and report on unusual activity, looking for
terrorists. Scientists are also at work on a machine that can
detect the difference between a natural disease outbreak and a
bio-terror attack. Unfortunately, these are issues we must
address for the foreseeable future. We've recently had a lot of
bad news about cheating going on in the financial sector. At
least one part of that sector, the National Association of Securities
Dealers, uses AI to monitor the activities of its traders, looking
not only at the trading patterns of individual traders, but at
articles in newspapers and other possible influences.
Q:
Whoa! Isn't that a big invasion of privacy? In fact,
didn't we hear that AI was going to be used for the government's
Total Information Awareness project? That makes me very
uncomfortable.
A: Americans cherish their privacy, and so
they should. American ideas about privacy have evolved legally
and socially over a long period. Moreover, Americans aren't the
only ones with such concerns–the European Union is even stricter
about the use of personal information than the U.S. But the
European Union also understands that the best defense against
terrorism is to be able to detect patterns of behavior that might
alert law enforcement officers to potential terrorism before it
happens. Like the privacy you give up for the convenience of
using a credit card, it's a trade-off. I think that trade-off
should be publicly debated, with all the gravity it
deserves.
Q: Shouldn't we just say no to intelligent
machines? Aren't the risks too scary?
A: The risks
are scary; the risks are real; but I don't think we should say
no. In my book, I go further. I don't think we can say
no.
Here's what I mean: one of the best things humans
have ever done for themselves was to collect, organize, and
distribute information in the form of libraries and
encyclopedias. We have always honored that effort, because we
understand that no human can carry everything worth knowing inside a
single head. The World Wide Web is this generation's new giant
encyclopedia, and the Semantic Web, which is the next generation
Web, will have intelligence built in. It will be as if
everybody with access to a computer can have the world's smartest
reference librarian at their fingertips, ready to help find exactly
what you need, no matter how ill-formed your question
is. And it will be able to offer some assurance that the
information you are getting is reliable–the present World Wide Web
cannot do that.
In other words, intelligent machines seem to
be part of a long human impulse to educate ourselves better and
better, to make life better for each of us.
Q: What's
ahead as AI succeeds even more?
A: Many of us already
deal with limited AI in our daily lives–credit cards, search engines
like Google, automated voice instructions from our GPS devices to
help us drive to our destinations; we order prescriptions over the
phone from semi-intelligent voice machines. But visionary
projects are underway to make–hey, read my book!
Q: Would
you consider yourself an AI optimist?
A: On the whole,
yes, though I'm not nearly as certain that AI will pervade our lives on the
time scale that some observers, such as Ray Kurzweil,
believe. Significant AI will come to us, but not in a
rush. My book talks about my experience with intelligent robots
at a meeting in the summer of 2003. Some people have said I was
too critical, too negative about that. But in March 2004, DARPA
staged a race for autonomous vehicles over a 142-mile desert
course. The best vehicle (Carnegie Mellon's entry) did just
over 7 miles before it quit. Some vehicles didn't even get
started. We have a way to go, and no wonder. In just a few
decades, we're trying to mimic and even surpass millions of years of
natural evolution.
Copyright © 2005-2010 by Pamela McCorduck. All rights reserved. Modified: March 09, 2010