Machines of Loving Grace

John Markoff

But the judges for the original Loebner contest in 1991 fell into two broad categories: computer literate and computer illiterate.
For human judges without computer expertise, it turned out that for all practical purposes the Turing test was conquered in that first year.
In reporting on the contest I quoted one of the nontechnical judges, a part-time auto mechanic, saying why she was fooled: “It typed something that I thought was trite, and when I responded it interacted with me in a very convincing fashion,” she said.5
It was a harbinger of things to come.
We now routinely interact with machines simulating humans, and they will continue to improve at convincing us of their faux humanity.

Today, programs like Siri not only seem almost human; they are beginning to make human-machine interactions in natural language seem routine.
The evolution of these software robots is aided by the fact that humans appear to want to believe they are interacting with humans even when they are conversing with machines.
We are hardwired for social interaction.
Whether or not robots move around to assist us in the physical world, they are already moving among us in cyberspace.
It’s now inevitable that these software bots—AIs, if only of limited capability—will increasingly become a routine part of daily life.

Intelligent software agents such as Apple’s Siri, Microsoft’s Cortana, and Google Now are interacting with hundreds of millions of people, by default defining this robot/human relationship.
Even at this relatively early stage Siri has a distinctly human style, a first step toward the creation of a generation of likable and trusted advisors.
Will it matter whether we interact with these systems as partners or keep them as slaves?
While there is an increasingly lively discussion about whether intelligent agents and robots will be autonomous—and if they are autonomous, whether they will be self-aware enough that we need to consider questions of “robot rights”—in the short term the more significant question is how we treat these systems and what the design of those interactions says about what it means to be human.
To the extent that we treat these systems as partners, it will humanize us.
Yet the question of what the relationship between humans and machines will be has largely been ignored by much of the modern computing world.

Jonathan Grudin, a computer scientist at Microsoft Research, has noted that the separate disciplines of artificial intelligence and human-computer interaction rarely speak to one another.6 He points to John McCarthy’s early explanation of the direction of artificial intelligence research: “[The goal] was to get away from studying human behavior and consider the computer as a tool for solving certain classes of problems. Thus AI was created as a branch of computer science and not as a branch of psychology.”7
McCarthy’s pragmatic approach can certainly be justified by the success the field has had in the past half decade.
Artificial intelligence researchers like to point out that aircraft can fly just fine without resorting to flapping their wings—an argument that asserts that to duplicate human cognition or behavior, it is not necessary to comprehend it.
However, the chasm between AI and IA has only deepened as AI systems have become increasingly facile at human tasks, whether it is seeing, speaking, moving boxes, or playing chess, Jeopardy!, or Atari video games.

Terry Winograd was one of the first to see the two extremes clearly and to consider the consequences.
His career traces an arc from artificial intelligence to intelligence augmentation.
As a graduate student at MIT in the 1960s, he focused on understanding human language in order to build a software equivalent to Shakey—a software robot capable of interacting with humans in conversation.
Then, during the 1980s, in part because of his changing views on the limits of artificial intelligence, he left the field—a shift in perspective moving from AI to IA.
Winograd walked away from AI in part because of a series of challenging conversations with philosophers at the University of California, Berkeley. With a small group of AI researchers, he took part in weekly seminars with the philosophers Hubert Dreyfus and John Searle.
The philosophers convinced him that there were real limits to the capabilities of intelligent machines.
Winograd’s conversion coincided with the collapse of the nascent artificial intelligence industry, a downturn that became known as the “AI Winter.”
Several decades later, Winograd, who was faculty advisor for Google cofounder Larry Page at Stanford, famously counseled the young graduate student to focus on the problem of Web search rather than self-driving cars.

In the intervening decades Winograd had become acutely aware of the importance of the designer’s point of view.
The separation of the fields of AI and human-computer interaction, or HCI, is partly a question of approach, but it’s also an ethical stance about designing humans either into or out of the systems we create.
More recently, Winograd helped create an academic program at Stanford focusing on “Liberation Technologies,” which studies the construction of computerized systems based on human-centered values.

Throughout human history, technology has displaced human labor.
Locomotives and tractors, however, didn’t make human-level decisions.
Increasingly, “thinking machines” will.
It is also clear that technology and humanity coevolve, which again will pose the question of who will be in control.
In Silicon Valley it has become fashionable to celebrate the rise of the machines, most clearly in the emergence of organizations like the Singularity Institute and in books like Kevin Kelly’s 2010 What Technology Wants.
In an earlier book, 1994’s Out of Control, Kelly came down firmly on the side of the machines.
He described a meeting between AI pioneer Marvin Minsky and Doug Engelbart:

When the two gurus met at MIT in the 1950s, they are reputed to have had the following conversation:

Minsky: We’re going to make machines intelligent. We are going to make them conscious!

Engelbart: You’re going to do all that for the machines? What are you going to do for the people?

This story is usually told by engineers working to make computers more friendly, more humane, more people centered.
But I’m squarely on Minsky’s side—on the side of the made.
People will survive.
We’ll train our machines to serve us.
But what are we going to do for the machines?8

Kelly is correct to point out that there are Minsky and Engelbart “sides.”
But to say that people will “survive” belittles the consequences.
He is basically echoing Minsky, who is famously said to have responded to a question about the significance of the arrival of artificial intelligence by saying, “If we’re lucky, maybe they’ll keep us as pets.”

Minsky’s position is symptomatic of the chasm between the AI and IA camps.
The artificial intelligence community has until now largely chosen to ignore the consequences of the systems it considers merely powerful tools, dispensing with discussions of morality.
As one of the engineers who is building next-generation robots told me when I asked about the impact of automation on people: “You can’t think about that; you just have to decide that you are going to do the best you can to improve the world for humanity as a whole.”

During the past half century, McCarthy’s and Engelbart’s philosophies have remained separate and their central conflict stands unresolved.
One approach supplants humans with an increasingly powerful blend of computer hardware and software.
The other extends our reach intellectually, economically, and socially using the same ingredients.
While the chasm between these approaches has been little remarked, the explosion of this new wave of technology, which now influences every aspect of modern life, will encapsulate the repercussions of this divide.

Will machines supplant human workers or augment them?
On one level, they will do both.
But once again, that is the wrong question to ask, and it provides only a partial answer.
Both software and hardware robots are flexible enough that they can ultimately become whatever we program them to be.
In our current economy, how robots—both machines and intelligent systems—are designed and how they are used is overwhelmingly defined by cost and benefit, and costs are falling at an increasingly rapid rate.
In our society, economics dictate that if a task can be done more cheaply by machine—software or hardware—in most cases it will be.
It’s just a matter of when.

The decision to come down on either side of the debate is doubly difficult because there are no obvious right answers.
Although driverless cars will displace millions of jobs, they will also save many lives.
Today, decisions about implementing technologies are made largely on the basis of profitability and efficiency, but there is an obvious need for a new moral calculus.
The devil, however, is in more than the details.
As with nuclear weapons and nuclear power, artificial intelligence, genetic engineering, and robotics will have society-wide consequences, both intended and unintended, in the next decade.

2 | A CRASH IN THE DESERT

On a desert road near Florence, Arizona, one morning in the fall of 2005, a Volkswagen Touareg was kicking up a dust cloud, bouncing along at a steady twenty to twenty-five miles per hour, carrying four passengers.
To the casual observer there was nothing unusual about the way the approaching vehicle was being driven.
The road was particularly rough, undulating up and down through a landscape dotted with cactus and scrubby desert vegetation.
The car bounced and wove, and all four occupants were wearing distinctive crash helmets.
The Touareg was plastered with decals like a contestant in the Baja 1000 off-road race.
It was also festooned with five curious sensors perched at the front of the roof, each with an unobstructed view of the road.
Other sensors, including several radars, also sprouted from the roof.
A video camera peered out through the windshield.
A tall whip antenna set at the back of the vehicle, in combination with the sensors, conspired to give a postapocalyptic vibe reminiscent of a Mad Max movie.

The five sensors on the roof were actually mechanical contraptions, each rapidly sweeping an infrared laser beam back and forth over the road ahead.
The beams, invisible to the eye, constantly reflected off the gravel road and the desert surrounding the vehicle.
Bouncing back to the sensors, the lasers provided a constantly changing portrait of the surrounding landscape accurate to the centimeter.
Even small rocks on the road hundreds of feet ahead could not escape the unblinking gaze of the sensors, known as lidar.
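
That centimeter-level accuracy rests on simple time-of-flight arithmetic: a pulse travels out, reflects, and returns, and half the round trip multiplied by the speed of light gives the range; sweeping the beam through known angles then turns those ranges into a cloud of points. Below is a minimal sketch of that arithmetic in Python. It is illustrative only, with invented function names; Stanley’s actual sensing pipeline was far more elaborate.

import math

C = 299_792_458.0  # speed of light in meters per second

def tof_range_m(round_trip_s):
    # A pulse travels out to the target and back, so the one-way
    # range is half the round-trip flight time times the speed of light.
    return C * round_trip_s / 2.0

def sweep_to_points(ranges_m, start_deg=-45.0, end_deg=45.0):
    # Convert one horizontal sweep of range readings into (x, y)
    # points in the sensor frame: x forward, y to the left.
    n = len(ranges_m)
    step = (end_deg - start_deg) / max(n - 1, 1)
    points = []
    for i, r in enumerate(ranges_m):
        theta = math.radians(start_deg + i * step)
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# A 10-nanosecond round trip is roughly 1.5 meters of range; holding
# centimeter accuracy means timing each echo to tens of picoseconds.
print(round(tof_range_m(10e-9), 3))  # 1.499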

The Touareg was even more peculiar inside.
The driver, Sebastian Thrun, a roboticist and artificial intelligence researcher, wasn’t driving.
Instead he was gesturing with his hands as he chatted with the other passengers.
His eyes rarely watched the road.
Most striking of all: his hands never touched the steering wheel, which twitched back and forth as if controlled by some unseen ghost.

Sitting behind Thrun was another computer researcher, Mike Montemerlo, who wasn’t driving either.
His eyes were buried in the screen of a laptop computer displaying the data from the lasers, radars, and cameras as a God’s-eye view of the world around the car, in which potential obstacles appeared as a partial rainbow of blips, as on a radar screen.
It revealed an ever-changing cloud of colored dots that in aggregate represented the road unfolding ahead in the desert.

The car, named Stanley, was being piloted by an ensemble of software programs running on five computers installed in the trunk.
Thrun was a pioneer of an advanced version of a robotic navigation technique known as SLAM, which stands for simultaneous localization and mapping.
It had become a standard tool for robots to find their way through previously unexplored terrain.
The wheel continued to twitch back and forth as the car rolled along the rutted road lined with cactus and frequent outcroppings of boulders.
Immediately to Thrun’s right, between the front seats, was a large red E-Stop button to override the car’s autopilot in an emergency.
After a half-dozen miles, the robotic meanderings of the Touareg felt anticlimactic.
Stanley wasn’t driving down the freeway, so as the desert scenery slid by, it seemed increasingly unnecessary to wear crash helmets for what was more or less a Sunday drive in the country.
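
SLAM’s central idea can be shown in miniature: the vehicle’s estimate of its own position and its estimate of the map are refined together, so each measurement improves both. The toy one-dimensional Python sketch below only illustrates that coupling; it is not Thrun’s or Montemerlo’s actual algorithm, which maintained full probability distributions over poses and maps, and the numbers are hypothetical.

def slam_1d(odometry, ranges, x0=0.0, landmark0=10.0, gain=0.3):
    # Jointly estimate the robot's position x and a landmark's
    # position lm along a line. Each step: predict x from odometry
    # (dead reckoning), then compare the measured range with the
    # expected range (lm - x) and split the correction between the
    # pose and the map -- the "simultaneous" in SLAM.
    x, lm = x0, landmark0
    for move, r in zip(odometry, ranges):
        x += move                      # localization: motion prediction
        innovation = r - (lm - x)      # measured minus expected range
        x -= gain * innovation / 2.0   # correct the pose...
        lm += gain * innovation / 2.0  # ...and the map, together
    return x, lm

# Hypothetical data: five one-meter moves toward a landmark near 10 m.
pose, landmark = slam_1d([1.0] * 5, [9.2, 8.1, 7.0, 6.1, 4.9])
print(round(pose, 2), round(landmark, 2))  # roughly 4.99 and 10.01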

The car was in training to compete in the Pentagon’s second Grand Challenge, an ambitious autonomous vehicle contest intended to jump-start technology planned for future robotic military vehicles.
At the beginning of the twenty-first century, Congress instructed the U.S. military to begin designing autonomous vehicles.
Congress even gave the Pentagon a specific goal: by 2015, one-third of the army’s vehicles were supposed to go places without human drivers present.
The directive wasn’t clear as to whether both autonomous and remotely teleoperated vehicles would satisfy the requirement.
In either case the idea was that smart vehicles would save both money and soldiers’ lives.
But by 2004, little progress had been made, and Tony Tether, then the controversial director of the Pentagon’s blue-sky research arm, DARPA, the Defense Advanced Research Projects Agency, came up with a high-profile contest as a gambit to persuade computer hackers, college professors, and publicity-seeking corporations to innovate where the military had failed.
Tether was a product of the military-industrial complex, and the contest itself was a daring admission that the defense contracting world was not able to get the job done.
By opening the door for ragtag teams of hobbyists, Tether ran the risk of undermining the classified world dominated by the Beltway Bandits that surround Washington, D.C., and garner the lion’s share of military research dollars.

The first Grand Challenge contest, held in 2004, was something of a fiasco.
Vehicles tipped over, drove in circles, and ignominiously knocked down fences.
Even the most successful entrant had gotten stuck in the dust just seven miles from the starting line in a 120-mile race, with one wheel spinning helplessly as it teetered off the edge of the road.
When the dust settled, a reporter flying overhead in a light plane saw brightly colored vehicles scattered motionless over the desert floor.
At the time it seemed obvious that self-driving cars were still years away, and Tether was criticized for organizing a publicity stunt.

Now, just a little more than a year later, Thrun was behind the wheel in a second-generation robot contestant.
It felt like the future had arrived sooner than expected.
It took only a dozen miles, however, to realize that techno-enthusiasm is frequently premature.
Stanley crested a rise in the desert and plunged smartly into a swale.
Then, as the car tilted upward, its laser guidance system swept across an overhanging tree limb.
Without warning the robot navigator spooked, the car wrenched violently first left, then right, and instantly plunged off the road.
It all happened faster than Thrun could reach over and pound the large red E-Stop button.

Luckily, the car found a relatively soft landing.
The Touareg had been caught by an immense desert thornbush just off the road.
It cushioned the crash landing and the car stopped slowly enough that the air bags didn’t deploy.
When the occupants surveyed the road from the crash scene, it was obvious that it could have been much worse.
Two imposing piles of boulders bracketed the bush, but the VW had missed them.

The passengers stumbled out and Thrun scrambled up on top of the vehicle to reposition the sensors bent out of alignment by the crash.
Then everyone piled back into Stanley, and Montemerlo removed an offending block of software code that had been intended to make the ride more comfortable for human passengers.
Thrun restarted the autopilot and the machine once again headed out into the Arizona desert.
There were other mishaps that day, too.
The AI controller had no notion of the consequences of mud puddles, and later in the day Stanley found itself ensnared in a small lake in the middle of the road. Fortunately, several human-driven support vehicles were nearby, and when the car’s wheels began spinning helplessly, the support team piled out to push the car out of the goo.

These were small setbacks for Thrun’s team, a group of Stanford University professors, VW engineers, and student hackers that was one of more than a dozen teams competing for a multimillion-dollar cash prize.
The day was a low point after which things improved dramatically.
Indeed, the DARPA contest would later prove to be a dividing line between a world in which robots were viewed as toys or research curiosities and one in which people began to accept that robots could move about freely.

Stanley’s test drive was a harbinger of technology to come.
The arrival of machine intelligence had been forecast for decades in the writings of science-fiction writers, so much so that when the technology actually began to appear, it seemed anticlimactic.
In the late 1980s, anyone wandering through the cavernous Grand Central Station in Manhattan would have noticed that almost a third of the morning commuters were wearing Sony Walkman headsets.
Today, of course, the Walkmans have been replaced by Apple’s iconic bright white iPhone headphones, and there are some who believe that technology haute couture will inevitably lead to a future version of Google Glass—the search engine maker’s first effort to augment reality—or perhaps more ambitious and immersive systems.
Like the frog in the pot, we have been desensitized to the changes wrought by the rapid increase and proliferation of information technology.

The Walkman, the iPhone, and Google Glass all prefigure a world where the line between what is human and who is machine begins to blur.
William Gibson’s Neuromancer, the science-fiction novel that popularized the idea of cyberspace, drew a portrait of a new cybernetic territory composed of computers and networks. It also painted a future in which computers were not discrete boxes, but would be woven together into a dense fabric that was increasingly wrapped around human beings, “augmenting” their senses.

It is not such a big leap to move from the early-morning commuters wearing Sony Walkman headsets, past the iPhone users wrapped in their personal sound bubbles, directly to Google Glass–wearing urban hipsters watching tiny displays that annotate the world around them.
They aren’t yet “jacked into the net,” as Gibson foresaw, but it is easy to assume that computing and communication technology is moving rapidly in that direction.

Gibson was early to offer a science-fiction vision of what has been called “intelligence augmentation.”
He imagined computerized inserts he called “microsofts”—with a lowercase m—that could be snapped into the base of the human skull to instantly add a particular skill—like a new language.
At the time—several decades ago—it was obviously an impossible bit of science fiction.
Today his cyborg vision is something less of a wild leap.

In 2013 President Obama unveiled the BRAIN initiative, an effort to simultaneously record the activities of one million neurons in the human brain.
But one of the major funders of the BRAIN initiative is DARPA, and the agency is not interested in just reading from the brain. BRAIN scientists will patiently explain that one of the goals of the plan is to build a two-way interface between the human brain and computers.
On its face, such an idea seems impossibly sinister, conjuring up images of the ultimate Big Brother and thought control.
At the same time there is a utopian implication inherent in the technology.
The potential future is perhaps the inevitable trajectory of human-computer interaction design, implicit in J. C. R. Licklider’s 1960 manifesto, “Man-Computer Symbiosis,” where he foretold a more intimate collaboration between humans and machines.

While the world of Neuromancer was wonderful science fiction, actually entering the world that Gibson portrayed presents a puzzle.
On one hand, the arrival of cyborgs poses the question of what it means to be human.
By itself that isn’t a new challenge.
While technology may be evolving increasingly rapidly today, humans have always been transformed by technology, as far back as the domestication of fire or the invention of the wheel (or its eventual application to luggage in the twentieth century).
Since the beginning of the industrial era machines have displaced human labor.
Now with the arrival of computing and computer networks, for the first time machines are displacing “intellectual” labor.
The invention of the computer generated an earlier debate over the consequences of intelligent machines.
The new wave of artificial intelligence technologies has now revived that debate with a vengeance.

Mainstream economists have maintained that over time the size of the workforce has continued to grow despite the changing nature of work driven by technology and innovation.
In the nineteenth century, more than half of all workers were engaged in agricultural labor; today that number has fallen to around 2 percent—and yet there are more people working than ever in occupations outside of agriculture.
Indeed, even with two recessions, between 1990 and 2010 the overall workforce in the United States increased by 21 percent.
If the mainstream economists are correct, no society-wide economic cataclysm due to automation is in the offing.

However, today we are entering an era where humans can, with growing ease, be designed in or out of “the loop,” even in formerly high-status, high-income, white-collar professional areas.
On one end of the spectrum smart robots can load and unload trucks.
On the other end, software “robots” are replacing call center workers and office clerks, as well as transforming high-skill, high-status professions such as radiology.
In the future, how will the line be drawn between man and machine, and who will draw it?
