Our Final Invention: Artificial Intelligence and the End of the Human Era

Intelligence will also win the day in the rapidly approaching future when we humans aren’t the most intelligent creatures around. Why wouldn’t it? When has a technologically primitive people prevailed over a more advanced? When has a less intelligent species prevailed over a brainier? When has an intelligent species even kept a marginally intelligent species around, except as pets? Look at how we humans treat our closest relatives, the Great Apes—chimpanzees, orangutans, and gorillas. Those that are not already bush meat, zoo inmates, or show biz clowns are endangered and living on borrowed time.

Certainly, as Granger says, no artificial systems do better than humans at recognizing faces, learning, and language. But in narrow fields AI is blindingly, dolorously powerful. Think about a being that has all that power at its command, and think about it being truly, roundly intelligent. How long will it be satisfied to be our tool? After a tour of Google, Inc.’s headquarters, historian George Dyson had this to say about where such a superintelligent being might live:

For thirty years I have been wondering, what indication of its existence might we expect from a true AI? Certainly not any explicit revelation, which might spark a movement to pull the plug. Anomalous accumulation or creation of wealth might be a sign, or an unquenchable thirst for raw information, storage space, and processing cycles, or a concerted attempt to secure an uninterrupted, autonomous power supply. But the real sign, I suspect, would be a circle of cheerful, contented, intellectually and physically well-nourished people surrounding the AI. There wouldn’t be any need for True Believers, or the downloading of human brains or anything sinister like that: just a gradual, gentle, pervasive and mutually beneficial contact between us and a growing something else. This remains a nontestable hypothesis, for now.

Dyson goes on to quote science fiction writer Simon Ings:

“When our machines overtook us, too complex and efficient for us to control, they did it so fast and so smoothly and so usefully, only a fool or a prophet would have dared complain.”

 

Chapter Thirteen

Unknowable by Nature

Both because of its superior planning ability and because of the technologies it could develop, it is plausible to suppose that the first superintelligence would be very powerful. Quite possibly, it would be unrivalled: it would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its top goal. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference. Even a “fettered superintelligence” that was running on an isolated computer, able to interact with the rest of the world only via text interface, might be able to break out of its confinement by persuading its handlers to release it. There is even some preliminary experimental evidence that this would be the case.

—Nick Bostrom, Future of Humanity Institute, Oxford University

With AI advancing on so many fronts, from Siri to Watson to OpenCog and LIDA, it’s hard to make the case that achieving AGI will fail because the problem is too hard. If the computer science approach doesn’t hack it, reverse engineering the brain will, though on a longer timeline. That’s Rick Granger’s goal: understanding the brain from the bottom up, by replicating the brain’s most fundamental structures in computer programs. And he can’t help but blow raspberries at researchers working from top-down cognitive principles, using computer science.

“They’re studying human behavior and trying to see if they can imitate that behavior with a computer. In all fairness, this is a bit like trying to understand a car without looking under the hood. We think we can write down what intelligence is. We think we can write down what learning is. We think we can write down what adaptive abilities are. But the only reason we even have any conception of those things is because we observe humans doing ‘intelligent’ things. But just seeing humans do it does not tell us in any detail what it is that they’re actually doing. The critical question is this: what’s the engineering specification for reasoning and learning? There are no engineering specs, so what are they working from except observation?”

And we are notoriously bad observers of ourselves. “A vast body of studies in psychology, neuroscience, and cognitive science show how over and over we are terrible at introspection,” Granger said. “We don’t have a clue about our own behaviors, nor the operations that underlie them.” Granger notes we’re also bad at making rational decisions, providing accurate eyewitness accounts, and remembering what just happened. But our limitations as observers don’t mean the cognitive sciences that rely on observation are all bunk. Granger just thinks they’re the wrong tools for penetrating intelligence.

“In computational neuroscience we’re saying ‘okay, what is it that the human brain actually does?’” Granger said. “Not what we think it does, not what we would like it to do. What does it actually do? And perhaps those will give us the definitions of intelligence, the definitions of adaptation, the definitions of language for the first time.”

Deriving computational principles from the brain starts with scientists examining what clusters of neurons do. Neurons are cells that send and receive electrochemical signals. Their most important parts are axons (the fibers connecting neurons to one another, which usually send the signals), synapses (the junctions the signals cross), and dendrites (which generally receive the signals). There are about a hundred billion neurons in the brain, and each can be connected to tens of thousands of others. This wealth of connections makes the brain’s operations massively parallel, unlike most computers, which work serially. In computing terms, serial processing means sequential processing: executing one computation at a time. Parallel processing means handling a lot of data concurrently, sometimes hundreds of thousands, even millions, of calculations at once.
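To make the serial-versus-parallel distinction concrete, here is a minimal Python sketch. The task, the ten-millisecond "processing time," and the fifty workers are invented for illustration; real neurons and real chips differ enormously, but the shape of the speedup is the point.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def process_signal(signal):
    """Stand-in for one unit of sensory processing (purely illustrative)."""
    time.sleep(0.01)              # pretend each input takes 10 ms to analyze
    return signal.upper()

signals = ["color", "sound", "smell", "temperature", "touch"] * 20   # 100 inputs

# Serial processing: one computation at a time, like a single CPU core.
start = time.time()
serial_results = [process_signal(s) for s in signals]
print(f"serial:   {time.time() - start:.2f} s")    # roughly 1 second

# Parallel processing: many computations handled concurrently, loosely
# analogous to billions of neurons all working at once (here, 50 threads).
start = time.time()
with ThreadPoolExecutor(max_workers=50) as pool:
    parallel_results = list(pool.map(process_signal, signals))
print(f"parallel: {time.time() - start:.2f} s")    # roughly 0.02 seconds
```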

For a moment imagine crossing a busy city street, and think of all the inputs of colors, sounds, smells, temperature, and foot-feel entering your brain through your ears, eyes, nose, limbs, and skin at the same time. If your brain wasn’t an organ that processed all that simultaneously, it would instantly be overtaxed. Instead, your senses gather all that input, process it through the neurons in your brain, and output behavior, such as staying in the crosswalk and avoiding other pedestrians.

Collections of neurons work together in circuits that are very much like electronic circuits. An electronic circuit conducts a current, and forces that current through wires and special components, such as resistors and diodes. In the process the current performs functions, like turning on a light, or starting a weed whacker. If you make a list of instructions that produce that function, or calculation, you have a computer program or algorithm.
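If the circuit-to-program analogy feels abstract, here is a toy version of that "list of instructions" written as a Python function. The light circuit, voltage, and resistance figures are made up for the example; the point is only that a fixed sequence of steps turning an input into an output is already an algorithm.

```python
# A circuit's job written out as an explicit list of instructions.
# The component values below are invented purely for illustration.

def light_circuit(switch_on, supply_volts=9.0, resistance_ohms=450.0):
    """Return the current (in amps) delivered to a small lamp."""
    if not switch_on:                            # open switch: no current flows
        return 0.0
    current = supply_volts / resistance_ohms     # Ohm's law: I = V / R
    return current                               # enough current and the lamp lights

print(light_circuit(True))    # 0.02
print(light_circuit(False))   # 0.0
```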

Clusters of neurons in your brain form circuits that function as algorithms. And they don’t turn on lights but identify faces, plan a vacation, and type a sentence. All while operating in parallel. How do researchers know what’s going on in those neuron clusters? Simply put, they gather high-resolution data with tools ranging from electrodes implanted directly in the brains of animals to neuroimaging techniques such as PET and fMRI scans for humans. Neural probes inside and outside the skull can tell what individual neurons are doing, while marking neurons with electrically sensitive dyes shows when specific neurons are active. From these techniques and others arise testable hypotheses about the algorithms that govern the circuits of the brain. Researchers have also begun determining the precise function of some parts of the brain. For more than a decade, for instance, neuroscientists have known that recognizing other people’s faces takes place in a part of the brain called the fusiform gyrus.

Now, where’s the beef? When computational systems are derived from the brain (the computational neuroscience approach), do they work better than those created de novo (the computer science approach)?

Well, one kind of brain-derived system, the artificial neural network, has been working so well for so long that it has become a backbone of AI. As we discussed in chapter 7, ANNs (which can be rendered in hardware or software) were invented in the 1960s to act like neurons. One of their chief benefits is that they can be taught. If you want to teach a neural net to translate text from French to English, for example, you can train it by inputting French texts and those texts’ accurate English translations. That’s called supervised learning. With enough examples, the network will recognize rules that connect French words to their English counterparts.

In brains, synapses connect neurons, and it is in these connections that learning takes place: the stronger the synaptic connection, the stronger the memory. In ANNs, the strength of a connection is likewise called its “weight,” expressed as a number. Rather than storing explicit translation rules, an ANN encodes the regularities it derives from its training in those weights. The more training, the better the translation. During training, the ANN measures its errors and adjusts its own synaptic weights to reduce them. In that sense, a neural network is inherently self-improving.

After training, when French text is input, the ANN applies what it learned during training and outputs its best translation. In essence, the ANN is recognizing patterns in the data. Today, finding patterns in vast amounts of unstructured data is one of AI’s most lucrative jobs.
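To see roughly how that training-then-inference loop works, here is a deliberately tiny sketch: a single artificial "neuron" whose weights are adjusted by its errors until it can label simple bit patterns. It is not the French-to-English system described above (that would take millions of examples), just supervised learning, error-driven weight updates, and inference at the smallest possible scale, with a toy task invented for the illustration.

```python
import math
import random

# One artificial "neuron" learning a toy pattern by supervised training.
# Task (invented for illustration): label a 3-bit input 1 if it contains
# more ones than zeros, otherwise 0.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Training set: every 3-bit pattern paired with its correct label.
examples = [([a, b, c], 1 if a + b + c >= 2 else 0)
            for a in (0, 1) for b in (0, 1) for c in (0, 1)]

weights = [random.uniform(-0.5, 0.5) for _ in range(3)]   # "synaptic weights"
bias = 0.0
rate = 0.5                                                # learning rate

for epoch in range(2000):
    for inputs, target in examples:
        # Forward pass: weighted sum of the inputs, squashed to 0..1.
        output = sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)
        # Error-driven update: nudge each weight to shrink the mistake.
        error = target - output
        weights = [w + rate * error * x for w, x in zip(weights, inputs)]
        bias += rate * error

# Inference: the learned weights, not hand-written rules, do the work.
for inputs, target in examples:
    prob = sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)
    print(inputs, "->", round(prob, 2), "(correct label:", target, ")")
```

After a couple of thousand passes, the outputs sit near 1 for patterns with more ones than zeros and near 0 otherwise, even though no explicit rule was ever written down.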

Besides language translation and data mining, ANNs are at work today in computer game AI, analyzing the stock market, and identifying objects in images. They’re in Optical Character Recognition programs that read the printed word, and in computer chips that steer guided missiles. ANNs put the “smart” in smart bombs. They’ll be critical to most AGI architectures as well.

And there’s something important to remember from chapter 7 about these ubiquitous neural nets. Like genetic algorithms, ANNs are “black box” systems. That is, the input, French language in our example, is transparent. And the output, here English, is understood. But what happens in between, no one understands. All the programmer can do is coach the ANN during training with examples, and try to improve the output. Since the output of “black box” artificial intelligence tools can’t ever be predicted, they can never be truly and verifiably safe.

Granger’s brain-derived algorithms offer results-based evidence that the best way to pursue intelligence might be to follow evolution’s model, the human brain, rather than cognitive science’s de novo systems.

In 2007, his Dartmouth College graduate students created a vision algorithm derived from brain research that identified objects 140 times faster than traditional algorithms. It beat out 80,000 other algorithms to win a $10,000 prize from IBM.

In 2010, Granger and colleague Ashok Chandrashekar created brain-derived algorithms for supervised learning. Supervised learning is used to teach machines optical character and voice recognition, spam detection, and more. Their brain-derived algorithms, created for use with a parallel processor, performed as accurately as serial algorithms doing the same job, but more than ten times faster. The new algorithms were derived from the most common types of neuron cluster, or circuit, in the brain.

In 2011, Granger and colleagues patented a reconfigurable parallel processing chip based on these algorithms. That means that some of the most common hardware in the brain can now be reproduced in a computer chip. Put enough of them together and, like IBM’s SyNAPSE program, you’ll be on your way to building a virtual brain. And just one of these chips today could accelerate and improve performance in systems designed to identify faces in crowds, find missile launchers in satellite photos, automatically label your sprawling digital photo collection, and hundreds of other tasks. In time, deriving brain circuits may lead to healing damaged brains by building components that augment or replace affected regions. One day, the parallel processing chip Granger’s team has patented could replace broken brain wetware.

Meanwhile, brain-derived software is working its way into traditional computing processes. The basal ganglia is an ancient “reptilian” part of the brain tied to motor control. Researchers have found that the basal ganglia uses reinforcement learning-type algorithms to acquire skills. Granger’s team has discovered that circuits in the cerebral cortex, the most recent addition to the brain, create hierarchies of facts and create relationships among facts, similar to hierarchical databases. These are two different mechanisms.
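For readers who want to see what a "reinforcement learning-type algorithm" looks like in code, here is a minimal trial-and-error learner in Python. The three-option "slot machine" task and its payout numbers are invented for illustration; this is not Granger's algorithm, just the generic mechanism the passage describes.

```python
import random

# Trial-and-error value learning: try actions, observe rewards, and
# gradually prefer the actions that pay off.

true_payouts = [0.2, 0.5, 0.8]       # hidden reward probability of each action
estimates = [0.0, 0.0, 0.0]          # the agent's learned value of each action
counts = [0, 0, 0]
epsilon = 0.1                        # how often to explore a random action

for trial in range(5000):
    if random.random() < epsilon:
        action = random.randrange(3)                 # explore
    else:
        action = estimates.index(max(estimates))     # exploit the best guess
    reward = 1.0 if random.random() < true_payouts[action] else 0.0
    counts[action] += 1
    # Incremental update: move the estimate a step toward the observed reward.
    estimates[action] += (reward - estimates[action]) / counts[action]

print("learned values:", [round(v, 2) for v in estimates])
```

After a few thousand trials the learned values converge toward the hidden payouts, and the best action gets chosen almost every time.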

Now here’s where it gets exciting. Circuits in these two parts of the brain, the basal ganglia and cortex, are connected by other circuits, combining their proficiencies. A direct parallel exists in computing. Computer reinforcement learning systems operate by trial and error: they must test huge numbers of possibilities in order to learn the right answer. That’s the primary way we use the basal ganglia to learn habits, like how to ride a bike or hit a baseball.

But humans also have that cortical hierarchical system, which enables us not just to blindly search through all the trial-and-error possibilities but to catalogue them, organize them hierarchically, and sift them much more intelligently. The combination works far faster and yields far better solutions than the purely trial-and-error learning of animals, such as reptiles, that rely on the basal ganglia alone.

Perhaps the most advanced thing we can do with the combined cortical-basal ganglia system is to run internal trial-and-error tests, without having to test every possibility externally. We can run many of them just by thinking them through: simulating inside our heads. Artificial algorithms that combine these methods perform far better than either method on its own, and Granger hypothesizes that this is very much like the advantage conferred by the combined systems in our brains.
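A rough way to see the payoff of "simulating inside our heads" is to compare blind external trial and error with a cheap internal model that pre-screens candidates before anything is tried for real. Everything in this sketch (the candidate plans, the scoring functions, the noise) is invented for illustration.

```python
import random

# Compare two strategies for finding a good 4-step "plan":
#   1) test every candidate in the (expensive) outside world, or
#   2) simulate cheaply first, then test only the most promising few.

random.seed(0)
candidate_plans = [[random.randint(-5, 5) for _ in range(4)] for _ in range(1000)]

def real_world_test(plan):
    """Expensive: imagine this takes seconds of physical action."""
    return sum(plan) - 0.1 * sum(abs(step) for step in plan)

def internal_model(plan):
    """Cheap, slightly noisy mental simulation of the same outcome."""
    return sum(plan) + random.gauss(0, 0.5)

# Pure trial and error: every plan gets tested "in the world" (1,000 tests).
best_blind = max(candidate_plans, key=real_world_test)

# Simulate first, then test only the top 10 candidates externally (10 tests).
shortlist = sorted(candidate_plans, key=internal_model, reverse=True)[:10]
best_simulated = max(shortlist, key=real_world_test)

print("score with 1000 real tests:", round(real_world_test(best_blind), 2))
print("score with only 10 real tests:", round(real_world_test(best_simulated), 2))
```

With only one percent of the external tests, the pre-screened result typically lands at or near the same score, which is the essence of the advantage the passage describes.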

Granger and other neuroscientists have also learned that just a few kinds of algorithms govern the circuits of the brain. The same core computational systems are used again and again in different sensory and cognitive operations, such as hearing and deductive reasoning. Once these operations are re-created in computer software and hardware, perhaps they can simply be duplicated to create modules that simulate different parts of the brain. And re-creating the algorithms for, say, hearing should yield better-performing voice recognition applications. In fact, this has already happened.
