Denialism

Michael Specter

By the end of 2001, however, Topol had moved on. He had begun to focus primarily on genomics and his research switched from treating heart attacks to preventing them. Others were studying whether Vioxx increased the risk of heart attacks, and he felt he had done all he could to address the issue. At the time, Topol had no idea how divisive Vioxx had become at Merck itself. It turned out that scientists there had worried as early as 1996 about the effect the drug would have on the cardiovascular system. The reason for that concern was clear. Vioxx altered the ratio of two crucial substances, the hormone prostacyclin and a molecule called thromboxane, which together help balance blood flow and its ability to clot properly. Suppressing prostacyclin reduces inflammation and pain, and that made Vioxx work. Suppress it too powerfully, however, and thromboxane can cause increased blood pressure and too much clotting, either of which can lead to heart attacks. By 2002, Merck decided to embark on a major study of the cardiovascular risks caused by Vioxx—just as Topol and his colleagues had suggested. The trial would have produced useful data fairly rapidly, but just before it began, the company abruptly scuttled the project. In the end, Merck never made any significant effort to assess the cardiovascular risk posed by its most successful product.

Instead, the company issued a “Cardiovascular Card” to sales representatives. More than three thousand members of the sales force were instructed to refer doctors with questions to the card, which claimed—falsely—that Vioxx was eight to eleven times safer than other similar painkillers. The sales reps were told to produce the card only when all else had failed, to help “physicians in response to their questions regarding the cardiovascular effects of Vioxx.” The FDA, realizing that doctors needed to understand the gravity of the findings from the VIGOR trial, issued a strongly worded letter instructing Merck to correct the record. “Your claim in the press release that Vioxx has a ‘favorable cardiovascular safety profile,’” the letter read in part, “is simply incomprehensible, given the rate of heart attack and serious cardiovascular events compared to naproxen.” The company reacted swiftly: “Do not initiate discussions of the FDA arthritis committee . . . or the results of the . . . VIGOR study,” the sales force was told. If doctors proved too querulous, the Merck representatives were instructed to respond by saying, “I cannot discuss the study with you.”

In the summer of 2004, the Lancet asked Topol and the gastroenterologist Gary W. Falk, a colleague of Topol’s from the Cleveland Clinic, to sum up the state of knowledge about Vioxx and similar drugs known, as a class, by the name coxibs. Their editorial was published that August under the title “A Coxib a Day Won’t Keep the Doctor Away.” Topol was taken aback when he realized how little had changed. “It was amazing to see that nothing had been done in three years,” he recalled. “It was not even clear that Vioxx protected the stomach. It cost four dollars a day for these darn pills.” On September 29 of that year, Topol happened to dine with Roy Vagelos, Merck’s much-admired former chief executive, who had been retired for nearly a decade. Topol was visiting a New York-based biopharmaceutical company called Regeneron whose board Vagelos chairs. There were few people in medicine for whom Topol had as much respect. “We started talking about Vioxx,” he said. “It was the first time I ever spoke to Roy about it. I remember that conversation well: it was at Ruth’s Chris Steak House in Westchester. Roy went on for a while. He was entirely opposed to the Merck approach to Vioxx. And he didn’t mince words.”

The following morning a Merck cardiologist called Topol and told him the company was removing Vioxx from the market. Another trial had shown that patients taking the drug were at increased risk of heart attack and stroke. That study, APPROVe, began in 2000 as an attempt to discover whether Vioxx helped prevent the recurrence of colon polyps. It didn’t. “I was shocked,” Topol said. “But I thought that it was responsible for them to pull it. And Steve Nissen came down to my office, also very pleased, and said, ‘Isn’t that great? They are pulling the drug.’ We both thought it was the right thing to do.”

For Topol, that could well have been how the story ended. But Merck began to mount a press offensive. The message never varied: Merck put patients first. “Everything they had ever done in the course of Vioxx was putting patients first. All the data was out there,” Topol said, still stunned by the brazen public lies. “This just wasn’t true. It wasn’t right. I called and tried to speak to Ray Gilmartin”—Merck’s chief executive. “Neither he nor anyone else returned my calls.” (That itself was significant: after all, Topol ran one of the most important cardiology departments in the country; he was also the director of a Merck drug trial.) “This was a breach of trust that really rocked the faith people have in institutions like those,” Topol said. “We are talking about thousands of heart attacks. There were simply gross discrepancies in what they presented to the FDA and what was published in journals. I took them on. I had to.”

That week, Topol wrote an op-ed piece for the New York Times. He called it “Vioxx Vanished.” The Times had a better idea: “Good Riddance to a Bad Drug.” Noting that Vioxx increased the risk of heart attacks and strokes, Topol wrote that “our two most common deadly diseases should not be caused by a drug.” He also published a column in the New England Journal of Medicine, called “Failing the Public Health”: “The senior executives at Merck and the leadership at the FDA,” he wrote, “share responsibility for not having taken appropriate action and not recognizing that they are accountable for the public health.”

On December 3, 2005, in a videotaped deposition presented under subpoena at one of the many trials following the recall, Topol argued that Vioxx posed an “extraordinary risk.” A colleague from the Cleveland Clinic, Richard Rudick, told him that Gilmartin, the Merck CEO, had become infuriated by Topol’s public attacks and had complained bitterly to the clinic’s board about the articles in the Times and the New England Journal of Medicine. “What has Merck ever done to the clinic to warrant this?” Gilmartin asked.

Two days after that testimony, Topol received an early call telling him not to attend an 8 a.m. meeting of the board of governors. “My position—chief academic officer—had been abolished. I was also removed as provost of the medical school I founded.” The clinic released a statement saying that there was no connection between Topol’s Vioxx testimony and his sudden demotion, after fifteen years, from one of medicine’s most prominent positions. A spokeswoman for the clinic called it a simple reorganization. The timing, she assured reporters, was a coincidence.

DID THE RECALL of Vioxx, or any other single event, cause millions of Americans to question the value of science as reflexively as they had once embraced it? Of course not. Over the decades, as our knowledge of the physical world has grown, we have also endured the steady drip of doubt—about both the definition of progress and whether the pursuit of science will always drive us in the direction we want to go. A market disaster like Vioxx, whether through malice, greed, or simply error, presented denialists with a rare opportunity: their claims of conspiracy actually came true. More than that, in pursuit of profits, it seemed as if a much-admired corporation had completely ignored the interests of its customers.

It is also true, however, that spectacular technology can backfire spectacularly—and science doesn’t always live up to our expectations. When we see something fail that we had assumed would work, whether it’s a “miracle” drug or a powerful machine, we respond with fear and anger. People often point to the atomic bomb as the most telling evidence of that phenomenon. That’s not entirely fair: however much we may regret it, the bomb did what it was invented to do.

That wasn’t the case in 1984, when a Union Carbide pesticide factory in Bhopal, India, released forty-two tons of toxic methyl isocyanate gas into the atmosphere, exposing more than half a million people to deadly fumes. The immediate death toll was 2,259; within two weeks that number grew to more than eight thousand. Nor was it true two years later, when an explosion at Unit 4 of the V. I. Lenin Atomic Power Station transformed a place called Chernobyl into a synonym for technological disaster. They were the worst industrial accidents in history—one inflicting immense casualties and the other a worldwide sense of dread. The message was hard to misinterpret: “Our lives depend on decisions made by other people; we have no control over these decisions and usually we do not even know the people who make them,” wrote Ted Kaczynski, better known as the Unabomber, in his essay “Industrial Society and Its Future”—the Unabomber Manifesto. “Our lives depend on whether safety standards at a nuclear power plant are properly maintained; on how much pesticide is allowed to get into our food or how much pollution into our air; on how skillful (or incompetent) our doctor is. . . . The individual’s search for security is therefore frustrated, which leads to a sense of powerlessness.”

Kaczynski’s actions were violent, inexcusable, and antithetical to the spirit of humanity he professed to revere. But who hasn’t felt that sense of powerlessness or frustration? Reaping the benefits of technology often means giving up control. That only matters, of course, when something goes wrong. Few of us know how to fix our carburetors, or understand the mechanism that permits telephone calls to bounce instantly off satellites orbiting some twenty-two thousand miles above the earth only to land a split second later in somebody else’s phone on the other side of the world.

That’s okay; we don’t need to know how they function, as long as they do. Two hundred or even fifty years ago, most people understood their material possessions—in many cases they created them. That is no longer the case. Who can explain how their computer receives its constant stream of data from the Internet? Or understands the fundamental physics of a microwave? When you swallow antibiotics, or give them to your children, do you have any idea how they work? Or how preservatives are mixed into many of the foods we eat or why? The proportion of our surroundings that any ordinary person can explain today is minute—and it keeps getting smaller.

This growing gap between what we do every day and what we know how to do only makes us more desperate to find an easy explanation when something goes wrong. Denialism provides a way to cope with medical mistakes like Vioxx and to explain the technological errors of Chernobyl or Bhopal. There are no reassuring safety statistics during disasters and nobody wants to hear about the tens of thousands of factories that function flawlessly, because triumphs are expected, whereas calamities are unforgettable. That’s why anyone alive on January 28, 1986, is likely to remember that clear, cold day in central Florida, when the space shuttle Challenger lifted off from the Kennedy Space Center, only to explode seventy-three seconds later, then disintegrate in a dense white plume over the Atlantic. It would be hard to overstate the impact of that accident. The space program was the signature accomplishment of American technology: it took us to the moon, helped hold back the Russians, and made millions believe there was nothing we couldn’t accomplish. Even our most compelling disaster—the Apollo 13 mission—was a successful failure, ending with the triumph of technological mastery needed to bring the astronauts safely back to earth.

By 1986, America had become so confident in its ability to control the rockets we routinely sent into space that on that particular January morning, along with its regular crew, NASA strapped a thirty-seven-year-old high school teacher named Christa McAuliffe from Concord, New Hampshire, onto what essentially was a giant bomb. She was the first participant in the new Teacher in Space program. And the last.

The catastrophe was examined in merciless detail at many nationally televised hearings. During the most remarkable of them, Richard Feynman stunned the nation with a simple display of show-and-tell. Feynman, a no-nonsense man and one of the twentieth century’s greatest physicists, dropped a rubber O-ring into a glass of ice water, where it quickly lost resilience and cracked. The ring, used as a flexible buffer, couldn’t take the stress of the cold, and it turned out neither could one just like it on the shuttle booster rocket that unusually icy day in January. Like so many of our technological catastrophes, this was not wholly unforeseen. “My God, Thiokol, when do you want me to launch, next April?” Lawrence Mulloy, manager of the Solid Rocket Booster Project at NASA’s Marshall Space Flight Center, complained to the manufacturer, Morton Thiokol, when engineers from the company warned him the temperature was too low to guarantee their product would function properly.

SCIENTISTS HAVE NEVER BEEN good about explaining what they do or how they do it. Like all human beings, though, they make mistakes, and sometimes abuse their power. The most cited of those abuses are the twin studies and other atrocities carried out by Nazi doctors under the supervision of Josef Mengele. While not as purely evil (because almost nothing could be), the most notorious event in American medical history occurred not long ago: from 1932 to 1972, in what became known as the Tuskegee Experiment, U.S. Public Health Service researchers refused to treat hundreds of poor, mostly illiterate African American sharecroppers for syphilis in order to get a better understanding of the natural progression of their disease. Acts of purposeful malevolence like those have been rare; the more subtle scientific tyranny of the elite has not been.

In 1883, Charles Darwin’s cousin Francis Galton coined the term “eugenics,” which would turn out to be one of the most heavily freighted words in the history of science. Taken from a Greek word meaning “good in birth,” eugenics, as Galton defined it, simply meant improving the stock of humanity through breeding. Galton was convinced that positive characteristics like intelligence and beauty, as well as less desirable attributes like criminality and feeblemindedness, were wholly inherited and that a society could breed for them (or get rid of them) as they would, say, a Lipizzaner stallion or a tangerine. The idea was that with proper selection of mates we could dispense with many of the ills that kill us—high blood pressure, for instance, or obesity, as well as many types of cancer.
