Big pharma, “money-back guarantees,” and academic freedom

Expensive healthcare

Let’s face it, the pharmaceutical industry has one heck of an image problem.  Even if it is recognized that the notorious and now-indicted Martin Shkreli, who raised the cost of a life-saving pill from $13.50 to $750, was really an exception, price-gouging and the accumulation of excessive profit are charges regularly and legitimately made against the industry.  To be fair, the cost of developing new medications is high and the risk of failure following years of investment in research efforts great, so the drug companies are not always whining when they claim that higher prices are essential to fund further development.  Which brings me to a remarkable proposal offered recently by Michael Rosenblatt, executive vice president and chief medical officer at Merck & Co., in the April 27 edition of the journal Science Translational Medicine.

Rosenblatt is less concerned with his company’s investment in developing new products than he is with its utilization of more basic research often undertaken in universities.  Specifically, he is troubled by the growing evidence that some basic scientific studies yield results that turn out not to be reproducible by others.  Incorrect or irreproducible results pose a special problem for translational research—the kind drug companies do when they try to turn biological discoveries into actual medicines.  According to a report on Rosenblatt’s proposal in the MIT Technology Review,

Back in 2012, the biotechnology company Amgen dropped a bomb on academic science when it said it found only six of 53 “landmark” cancer papers stood up to efforts to reproduce the results of promising new research. Other studies that drug companies say can’t be replicated include one that found a cancer drug might treat Alzheimer’s and another that showed a particular gene was linked to diabetes in mice.

Rosenblatt says the costs of repeating wrong research are adding up. He says on average it takes “approximately two to six scientific personnel one to two years of work in an industry laboratory” to try to reproduce original experiments at an average cost of $500,000 to $2 million.

Writes Rosenblatt,

Research conducted in academia provides new insights into human pathophysiological mechanisms and identifies new targets for drug discovery. The public looks to collaborations between academia and industry, and then ultimately industry, to translate this research into innovations that improve health, namely new medicines and vaccines, as rapidly as possible. . . .  The diversion of much of a critical component of the translational effort into avenues that have little chance of success wastes not only a portion of the public’s investment in academic research and industry’s subsequent investment but also valuable time.

I’m no scientist, but it’s not hard to see how this could indeed be a problem.  But it’s also not hard to see the perhaps much bigger problems with Rosenblatt’s suggested remedy:  “I propose a potential approach to diminish the data irreproducibility problem at the academia-industry interface,” he writes,

What if universities stand behind the research data that lead to collaborative agreements with industry, and what if industry provides a financial incentive for data that can be replicated? Currently, industry expends and universities collect funding, even when the original data cannot be reproduced. In the instance of failure, collaborations dissolve, with resulting opportunity loss for both academia and industry. But what if universities offered some form of full or partial money-back guarantee?

In other words, Rosenblatt (and perhaps Merck itself) thinks punitive economic incentives are in order, meaning that if research that drug companies pay for turns out to be “wrong,” universities would have to give back the funding they got. He thinks this will put the pressure right where it belongs, on the scientists.

This, of course, smacks of a “heads I win, tails you lose” approach to funding.  For one thing, if a drug company claims that it failed to reproduce results provided by a university study that it funded, who’s to judge that this claim is not itself false?  Moreover, surely the problem of irreproducible results is not mainly, if at all, a product of ill intention or fraud.  But Rosenblatt touts his proposal’s alleged collateral benefits for universities and scholars.  “Academic investigators might be more careful to bring forward only data in which they had a high degree of confidence (in order to avoid retraction and financial loss for the university). Similarly, ‘early adopter’ research institutions might become preferred partners for industry, stimulating other universities to follow.”  But don’t these incentives exist already?  Rosenblatt seems to ignore the fact that his approach would most likely deter risk-taking and limit the ability of researchers in basic science to pursue potentially promising but uncertain paths of investigation.  If scientific studies are indeed often flawed (and I’m not at all sure that they are to the extent that Rosenblatt’s article claims) the likely culprits are ones his proposal does nothing to address: pressure to publish and win grants, careerism, poor training of students, and journals that don’t review reports rigorously enough.

Indeed, often, I suspect, the problem is that drug companies no less than the general public are all too willing to rush to judgment about conclusions that in fact were never really advanced — at least without qualification — by researchers themselves.  Moreover, as the MIT Technology Review report noted, “Suppressing and massaging negative results from drug trials isn’t uncommon and it’s a lot more likely to harm patients than bungled academic research. . . .  [I]n 2004, Merck had to recall the pain drug Vioxx and pay out billions in damages after it became clear that the pill posed a deadly risk the company knew all about.”

That article also pointed to the impracticality of Merck’s proposal:

It’s unlikely that universities will jump at Merck’s offer for more accountability. That’s because they are set up to collect R&D money, not return it. “The issue is certainly serious—but if this became a requirement it would stop [university-industry] research in its tracks,” says David Winwood, a business development executive at the Pennington Biomedical Research Center in Baton Rouge, Louisiana. “Few if any public schools would have either the (financial) capacity or, I suspect, the legal authority, to enter into such an agreement.”

The best protection against research error or fraud is not the imposition of perverse negative economic “incentives” by outside grantors, but instead the scrupulous control over externally funded research by knowledgeable faculty, an argument the AAUP makes at length in its book-length Recommended Principles to Guide Academy-Industry Relationships, which offers 56 specific recommendations.  Of these the first two are primary:

PRINCIPLE 1—Faculty Governance: The university must preserve the primacy of shared academic governance in establishing campuswide policies for planning, developing, implementing, monitoring, and assessing all donor agreements and collaborations, whether with private industry, government, or nonprofit groups. Faculty, not outside sponsors, should retain majority control over the campus management of such agreements and collaborations.

PRINCIPLE 2—Academic Freedom, Autonomy, and Control: The university must preserve its academic autonomy—including the academic freedom rights of faculty, students, postdoctoral fellows, and academic professionals—in all its relationships with industry and other funding sources by maintaining majority academic control over joint academy-industry committees and exclusive academic control over core academic functions (such as faculty research evaluations, faculty hiring and promotion decisions, classroom teaching, curriculum development, and course content).

Of course, as far as the general public and the political establishment are concerned, almost all scientific research can potentially be deemed meaningless.  Today Inside Higher Ed reported that Sen. Jeff Flake (R-AZ) has issued a new report “criticizing 20 government-funded studies, each headlined with the question it set out to answer: Do drunk birds slur when they sing? Where does it hurt the most to be stung by a bee? Are Republicans or Democrats more disgusted by eating worms?”  Except, of course, Flake’s report totally misconstrues the nature and purpose of these studies to score a cheap political point.  Moreover, even apparently silly studies can have big impacts, which is precisely why Flake’s grandstanding — targeted really at the reduction of support for science more generally — as well as Rosenblatt’s proposed “money-back guarantee” are potentially so dangerous.

Inside Higher Ed’s report called attention to something called the Golden Goose Award, established in 2012:

The award goes to strange-sounding federally funded research that led to groundbreaking results. Every year, the awards lead to headlines like “How a fluorescent jellyfish — and federal dollars — helped fight AIDS” and “Why ‘the sex life of the screwworm’ deserves taxpayer dollars.”

“We don’t know that what we do today — or what seems silly today — won’t have significant benefit in the future,” said Charles Snowdon, a psychology and zoology professor at the University of Wisconsin at Madison. . . .

But Flake’s report — flashy, colorful, sensationalist — doesn’t get at that nuance, Smith said. “What he’s put out is essentially clickbait.”

“Trying to provide a broader context for our research against a sexy headline or a bumper sticker, that’s just tough to do,” he added. “What burns through the debate is the bumper sticker.”

One commentator who has sought to provide a broader context for the debate over useful — and reproducible — research in science is the comedian John Oliver, whose recent discussion of science provides a fitting, informative, and entertaining coda to this post and is not to be missed: