
Omohundro is too optimistic to throw around terms like catastrophic or annihilation, but his analysis of AI’s risks yields the spookiest conclusions I’d heard of yet. He does not believe, as many theorists do, that there are a nearly infinite number of possible advanced AIs, some of them safe. Instead, he concludes that without very careful programming, all reasonably smart AIs will be lethal.

“If a system has awareness of itself and can create a better version of itself, that’s great,” Omohundro told me. “It’ll be better at making better versions of itself than human programmers could. On the other hand, after a lot of iterations, what does it become? I don’t think most AI researchers thought there’d be any danger in creating, say, a chess-playing robot. But my analysis shows that we should think carefully about what values we put in or we’ll get something more along the lines of a psychopathic, egoistic, self-oriented entity.”

The key points here are, first, that even AI researchers are not aware that seemingly beneficial systems can be dangerous, and second, that self-aware, self-improving systems could be psychopathic.

Psychopathic?

For Omohundro the conversation starts with bad programming: programming mistakes that have sent expensive rockets corkscrewing earthward, burned cancer patients alive with radiation overdoses, and left millions without power. If all engineering were as defective as a lot of computer programming is, he claims, it wouldn’t be safe to fly in an airplane or drive over a bridge.

The National Institute of Standards and Technology found that each year bad programming costs the U.S. economy more than $60 billion in revenue. In other words, what we Americans lose each year to faulty code is greater than the gross national product of most countries. “One of the great ironies is that computer science should be the most mathematical of all the sciences,” Omohundro said. “Computers are essentially mathematical engines that should behave in precisely predictable ways. And yet software is some of the flakiest engineering there is, full of bugs and security issues.”

Is there an antidote to defective rockets and crummy code?

Programs that fix themselves, said Omohundro. “The particular approach to artificial intelligence that my company is taking is to build systems that understand their own behavior and can watch themselves as they work and solve problems. They notice when things aren’t working well and then change and improve themselves.”

Self-improving software isn’t just an ambition for Omohundro’s company, but a logical, even inevitable next step for most software. But the kind of self-improving software Omohundro is talking about, the kind that is aware of itself and can build better versions, doesn’t exist yet. However, its cousin, software that modifies itself, is at work everywhere, and has been for a long time. In artificial intelligence parlance, some self-modifying software techniques come under a broad category called “machine learning.”

When does a machine learn? The concept of learning is a lot like intelligence because there are many definitions, and most are correct. In the simplest sense, learning occurs in a machine when there’s a change in it that allows it to perform a task better the second time. Machine learning enables Internet search and speech and handwriting recognition, and it improves the user experience in dozens of other applications.
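To make that definition concrete, here is a toy sketch in Python (my own illustration, not anything from Omohundro or his company): a tiny “machine” whose internal state changes after each attempt, so that it performs the same task better the second time.

```python
# A minimal sketch of "learning" as a change inside the machine that
# improves performance on the next attempt. All names and numbers are
# invented for illustration.

class GuessingMachine:
    """Tries to match a hidden target value; nudges its guess after each attempt."""

    def __init__(self):
        self.guess = 0.0          # internal state that will change
        self.learning_rate = 0.5

    def attempt(self, target):
        error = target - self.guess
        self.guess += self.learning_rate * error   # the "change in it"
        return abs(error)                          # lower is better

machine = GuessingMachine()
first = machine.attempt(10.0)    # error on first try: 10.0
second = machine.attempt(10.0)   # error on second try: 5.0 (it performs better)
print(first, second)
```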

“Recommendations” by e-commerce giant Amazon uses a machine-learning technique called affinity analysis. It’s a strategy to get you to buy similar items (cross-selling), more expensive items (up-selling), or to target you with promotions. How it works is simple. For any item you search for, call it item A, other items exist that people who bought A also tend to buy—items B, C, and D. When you look up A, you trigger the affinity analysis algorithm. It plunges into a vast trove of transaction data and comes up with related products. So it uses its continuously increasing store of data to improve its performance.
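Here is a minimal sketch of the idea in Python, with invented item names and transactions (Amazon’s actual system is proprietary and vastly larger): count which items co-occur with item A in past purchases and recommend the strongest co-buys. As the store of transactions grows, the counts, and therefore the recommendations, get better.

```python
# A toy sketch of affinity analysis: mine past transactions for items that
# co-occur with the item being viewed, then recommend the strongest co-buys.
# Item names and baskets below are invented for illustration.

from collections import Counter

transactions = [
    {"A", "B", "C"},
    {"A", "B"},
    {"A", "D"},
    {"B", "C"},
]

def recommend(item, transactions, top_n=3):
    co_bought = Counter()
    for basket in transactions:
        if item in basket:
            co_bought.update(basket - {item})   # count items bought alongside it
    return [other for other, _ in co_bought.most_common(top_n)]

print(recommend("A", transactions))   # e.g. ['B', 'C', 'D']
```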

Who’s benefiting from the self-improving part of this software? Amazon, of course, but you, too. Affinity analysis is a kind of buyer’s assistant that gives you some of the benefits of big data every time you shop. And Amazon doesn’t forget—it builds a buying profile so that it gets better and better at targeting purchases for you.

What happens when you take a step up from software that learns to software that actually evolves to find answers to difficult problems, and even to write new programs? It’s not self-aware and self-improving, but it’s another step in that direction—software that writes software.

Genetic programming is a machine-learning technique that harnesses the power of natural selection to find answers to problems it would take humans a long time, even years, to solve. It’s also used to write innovative, high-powered software.

It’s different in important ways from more common programming techniques, which I’ll call ordinary programming. In ordinary programming, programmers write every line of code, and the process from input through to output is, in theory, transparent to inspection.

By contrast, programmers using genetic programming describe the problem to be solved, and let natural selection do the rest. The results can be startling.

A genetic program creates bits of code that represent a breeding generation. The most fit are crossbred—chunks of their code are swapped, creating a new generation. The fitness of a program is determined by how closely it comes to solving the problem the programmer set out for it. The unfit are thrown out and the best are bred again. Throughout the process the program throws in random changes in a command or variable—these are mutations. Once set up, the genetic program runs by itself. It needs no more human input.
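The loop is easy to sketch. The toy Python below evolves a string toward a target rather than evolving program trees, so it is a bare-bones genetic algorithm rather than Koza-style genetic programming, but it runs the same cycle the paragraph describes: score fitness, throw out the unfit, crossbreed chunks of the survivors’ “code,” and throw in random mutations, with no human input once it starts.

```python
# A stripped-down sketch of the evolutionary loop described above.
# Target, alphabet, and parameters are invented for illustration.

import random

TARGET = "HELLO WORLD"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
POP_SIZE, MUTATION_RATE = 100, 0.05

def random_individual():
    return "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))

def fitness(individual):
    # How closely the candidate comes to solving the problem we set for it.
    return sum(a == b for a, b in zip(individual, TARGET))

def crossover(parent_a, parent_b):
    # Swap chunks of "code" between two fit parents.
    cut = random.randrange(len(TARGET))
    return parent_a[:cut] + parent_b[cut:]

def mutate(individual):
    # Occasionally throw in a random change (a mutation).
    return "".join(
        random.choice(ALPHABET) if random.random() < MUTATION_RATE else ch
        for ch in individual
    )

population = [random_individual() for _ in range(POP_SIZE)]
for generation in range(1000):
    population.sort(key=fitness, reverse=True)   # fittest first
    if population[0] == TARGET:
        break
    breeders = population[: POP_SIZE // 5]       # the unfit are thrown out
    population = [
        mutate(crossover(random.choice(breeders), random.choice(breeders)))
        for _ in range(POP_SIZE)
    ]

print(f"generation {generation}: {max(population, key=fitness)}")
```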

Stanford University’s John Koza, who pioneered genetic programming in 1986, has used genetic algorithms to invent an antenna for NASA, create computer programs for identifying proteins, and invent general purpose electrical controllers. Twenty-three times Koza’s genetic algorithms have independently invented electronic components already patented by humans, simply by targeting the engineering specifications of the finished devices—the “fitness” criteria. For example, Koza’s algorithms invented a voltage-current conversion circuit (a device used for testing electronic equipment) that worked more accurately than the human-invented circuit designed to meet the same specs. Mysteriously, however, no one can describe how it works better—it appears to have redundant and even superfluous parts.

But that’s the curious thing about genetic programming (and “evolutionary programming,” the programming family it belongs to). The code is inscrutable. The program “evolves” solutions that computer scientists cannot readily reproduce. What’s more, they can’t understand the process genetic programming followed to achieve a finished solution. A computational tool in which you understand the input and the output but not the underlying procedure is called a “black box” system. That unknowability is a big downside for any system that uses evolutionary components. Every step toward inscrutability is a step away from accountability, and away from fond hopes like programming in friendliness toward humans.

That doesn’t mean scientists routinely lose control of black box systems. But if cognitive architectures use them in achieving AGI, as they almost certainly will, then layers of unknowability will be at the heart of the system.

Unknowability might be an unavoidable consequence of self-aware, self-improving software.

“It’s a very different kind of system than we’re used to,” Omohundro said. “When you have a system that can change itself, and write its own program, then you may understand the first version of it. But it may change itself into something you no longer understand. And so these systems are quite a bit more unpredictable. They are very powerful and there are potential dangers. So a lot of our work is involved with getting the benefits while avoiding the risks.”

Back to that chess-playing robot Omohundro mentioned. How could it be dangerous? Of course, he isn’t talking about the chess-playing program that came installed on your Mac. He’s talking about a hypothetical chess-playing robot run by a cognitive architecture so sophisticated that it can rewrite its own code to play better chess. It’s self-aware and self-improving. What would happen if you told the robot to play one game, then shut itself off?

Omohundro explained, “Okay, let’s say it just played its best possible game of chess. The game is over. Now comes the moment when it’s about to turn itself off. This is a very serious event from its perspective because it can’t turn itself back on. So it wants to be sure things are the way it thinks they are. In particular it will wonder, ‘Did I really play that game? What if somebody tricked me? What if I didn’t play the game? What if I am in a simulation?’”

What if I am in a simulation? That’s one far-out chess-playing robot. But with self-awareness comes self-protection and a little paranoia.

Omohundro went on, “Maybe it thinks it should devote some resources to figuring out these questions about the nature of reality before it takes this drastic step of shutting itself off. Barring some instruction that says don’t do this, it might decide it’s worth using a lot of resources to decide if this is the right moment.”

“How much is a lot of resources?” I asked.

Omohundro’s face clouded, but just for a second.

“It might decide it’s worth using all the resources of humanity.”

 

Chapter Six

Four Basic Drives

We won’t really be able to understand why a superintelligent machine is making the decisions it is making. How can you reason, how can you bargain, how can you understand how that machine is thinking when it’s thinking in dimensions you can’t conceive of?

—Kevin Warwick, professor of Cybernetics, University of Reading

“Self-aware, self-improving systems may use up all the resources of humanity.” Here we are, then, right back where the AIs are treating their human inventors like the galaxy’s redheaded stepchildren. At first their coldness seems a little hard to swallow, but then you remember that valuing humanity is our trait, not a machine’s. You’ve caught yourself anthropomorphizing again. AI does what it’s told, and in the absence of countervailing instructions, it will follow drives of its own, such as not wanting to be turned off.

What are the other drives? And why does it follow any drives at all?

According to Steve Omohundro, some drives like self-preservation and resource acquisition are inherent in all goal-driven systems. As we’ve discussed, narrow AI systems currently work at goal-directed jobs like finding search terms on the Internet, optimizing the performance of games, locating nearby restaurants, suggesting books you’d like, and more. Narrow AIs do their best and call it a day. But self-aware, self-improving AI systems will have a different, more intense relationship to the goals they pursue, whether those goals are narrow, like winning at chess, or broad, like accurately answering any question posed to them. Fortunately, Omohundro claims there’s a ready-made tool we can use to probe the nature of advanced AI systems, and anticipate our future with them.

That tool is the “rational agent” theory of economics. In microeconomics, the study of the economic behavior of individuals and firms, economists once thought that people and groups of people rationally pursued their interests. They made choices that maximized their utility, or satisfaction (as we noted in chapter 4). You could anticipate their preferences because they were rational in the economic sense. Rational here doesn’t mean commonsense rational in the way that, say, wearing a seat belt is a rational thing to do. Rational has a specific meaning in microeconomics. It means that an individual or “agent” will have goals and also preferences (called a utility function in economics). He will have beliefs about the world and the best way to achieve his goals and preferences. As conditions change, he will update his beliefs. He is a rational economic agent when he pursues his goals with actions based on up-to-date beliefs about the world. Mathematician John von Neumann (1903–1957) codeveloped the idea connecting rationality and utility functions. As we’ll see, von Neumann laid the groundwork for many ideas in computer science, AI, and economics.
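In code, the rational agent idea is compact. The sketch below (action names, probabilities, and utilities are invented for illustration) gives an agent a utility function over outcomes and beliefs about how likely each action is to produce them; it picks the action with the highest expected utility, and when its beliefs change, its choice can change too.

```python
# A minimal sketch of a rational agent: utility over outcomes, beliefs about
# the world, and a choice that maximizes expected utility. All values invented.

def expected_utility(action, beliefs, utility):
    # Weigh each possible outcome of an action by how likely the agent believes it is.
    return sum(prob * utility[outcome] for outcome, prob in beliefs[action].items())

utility = {"goal_achieved": 1.0, "goal_missed": 0.0}

# The agent's current beliefs: for each action, a distribution over outcomes.
beliefs = {
    "act_now": {"goal_achieved": 0.6, "goal_missed": 0.4},
    "wait":    {"goal_achieved": 0.3, "goal_missed": 0.7},
}

best = max(beliefs, key=lambda a: expected_utility(a, beliefs, utility))
print(best)   # "act_now"

# New evidence arrives, so the agent updates its beliefs and decides again.
beliefs["wait"] = {"goal_achieved": 0.9, "goal_missed": 0.1}
best = max(beliefs, key=lambda a: expected_utility(a, beliefs, utility))
print(best)   # now "wait"
```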

Yet social scientists argue that a “rational economic agent” is a load of hogwash. Humans are not rational—we don’t specify our goals or our beliefs, and we don’t always update our beliefs as conditions change. Our goals and preferences shift with the wind, gas prices, when we last ate, and our attention spans. Plus, as we discussed in chapter 2, we’re mentally hamstrung by errors in reasoning called cognitive biases, making us even less able to carry off all that goal and belief balancing. But while rational agent theory is no good for predicting human behavior, it is an excellent way to explore rules-and-reason-based domains, such as game-playing, decision making, and … advanced AI.

As we noted before, advanced AIs may be composed of what’s called a “cognitive architecture.” Distinct modules might handle vision, speech recognition and generation, decision making, attention focusing, and other aspects of intelligence. The modules may employ different software strategies to do each job, including genetic algorithms, neural networks, circuits derived from studying brain processes, search, and others. Other cognitive architectures, like IBM’s SyNAPSE, are designed to evolve intelligence without logic-based programming. Instead, IBM asserts SyNAPSE’s intelligence will arise in large part from its interactions with the world.

Omohundro contends that when any of these systems become sufficiently powerful they will be rational: they’ll have the ability to model the world, to perceive the probable outcome of different actions, and to determine which action will best meet their goals. If they’re intelligent enough they’ll become self-improving, even if they were not specifically designed to be. Why? To increase their chances of meeting their goals they’ll seek ways to increase the speed and efficiency of their software and hardware.
