The uses of being wrong

My new book has an odd intellectual provenance—it starts with me being wrong. Back in the fall of 2008, I was convinced that the open global economic order, centered on the unfettered cross-border exchange of goods, services, and ideas, was about to collapse as quickly as Lehman Brothers.

A half-decade later, the closer I looked at the performance of the system of global economic governance, the clearer it became that the meltdown I had expected had not come to pass. Though the advanced industrialized economies suffered prolonged economic slowdowns, at the global level there was no great surge in trade protectionism, no immediate clampdown on capital flows, and, most surprisingly, no real rejection of neoliberal economic principles. Given what has normally transpired after severe economic shocks, this outcome was damn near miraculous.

Nevertheless, most observers have remained deeply pessimistic about the functioning of the global political economy. Indeed, scholarly books with titles like No One’s World: The West, the Rising Rest, and the Coming Global Turn and The End of American World Order have reached the opposite conclusion from mine. Now I’m trying to understand how I got the crisis so wrong back in 2008, and why so many scholars continue to be wrong now.

Confessions of wrongness in academic research should be unsurprising. (To be clear, being wrong in a prediction is different from making an error. Error, even if committed unknowingly, suggests sloppiness. That carries a more serious stigma than making a prediction that fails to come true.) Anyone with a passing familiarity with the social sciences knows that, by and large, we do not get an awful lot of things right. Unlike most physical and natural scientists, social scientists have only a limited ability to conduct experiments or rely on high-quality data. In my field, international relations, even the most robust econometric analyses often explain a pathetically small amount of the statistical variance in the data. Indeed, from my first exposure to the philosopher of science Imre Lakatos, I was taught that the goal of social science is falsification. By proving an existing theory wrong, we refine our understanding of what our models can and cannot explain.

And yet the falsification enterprise is generally devoted to proving other scholars wrong. It’s rare for academics to publicly disavow their own theories and hypotheses. Indeed, a common lament in the social sciences is that negative findings—i.e., empirical tests that fail to support an author’s initial hypothesis—rarely get published.

Even in the realm of theory, there are only a few cases of scholars’ acknowledging that the paradigms they’ve constructed do not hold. In 1958, Ernst Haas, a political scientist at the University of California at Berkeley, developed a theory of political integration, positing that as countries cooperated on noncontroversial issues, like postal regulations, that spirit of cooperation would spill over into contentious areas, like migration. Haas used this theory—he called it “neofunctionalism”—to explain European integration a half-century ago. By the 1970s, however, Europe’s march toward integration seemed to be going into reverse, and Haas acknowledged that his theory had become “obsolete.” This did not stop later generations of scholars from resurrecting his idea once European integration was moving forward again.

Haas is very much the exception and not the rule. I’ve read a fair amount of international-relations theory over the years, from predictions that great-power peace would not survive the end of the Cold War, to the end of history, to the rise of a European superpower, to the causes of suicide terrorism. Most of these sweeping hypotheses have either failed to come true or failed to hold up over time. That has not prevented their progenitors from continuing to advocate them. Some of them echo the biographer who, without a trace of irony, proclaimed that “proof of Trotsky’s farsightedness is that none of his predictions have come true yet.”

The persistence of so-called “zombie ideas” is something of a problem in the social sciences. Even if a theory or argument has been discredited by others in the moment, a stout defense can ensure a long intellectual life. When Samuel P. Huntington published his “clash of civilizations” argument, in the 1990s, the overwhelming scholarly consensus was that he was wrong. This did not stop the “clash” theory from permeating policy circles, particularly after 9/11.

Why is it so hard for scholars to admit when they are wrong? It is not necessarily concern for one’s reputation. Even predictions that turn out to be wrong can be intellectually profitable—all social scientists love a good straw-man argument to pummel in a literature review. Bold theories get cited a lot, regardless of whether they are right.

Part of the reason is simple psychology; we all like being right much more than being wrong. As Kathryn Schulz observes in Being Wrong, “the thrill of being right is undeniable, universal, and (perhaps most oddly) almost entirely undiscriminating…. It’s more important to bet on the right foreign policy than the right racehorse, but we are perfectly capable of gloating over either one.”

Furthermore, as scholars craft arguments and find supporting evidence, they persuade themselves that they are right. And the degree of self-confidence a scholar projects has an undeniable effect on how others perceive the argument. As much as published scholarship is supposed to count über alles, there is no denying that confident scholars can sway opinions. I know colleagues who make fantastically bold predictions, and I envy their serene conviction that they are right despite ample evidence to the contrary.

There can be rewards for presenting a singular theoretical framework. In Expert Political Judgment, Philip Tetlock notes that there are foxes (experts who adapt their mental models to changing circumstances) and hedgehogs (experts who keep their worldviews fixed and constant). Tetlock, a professor of psychology at the University of Pennsylvania, found that foxes are better than hedgehogs at predicting future events—but that hedgehogs are more likely to make the truly radical predictions that turn out to be right.

That said, the benefits of being wrong are understated. Schulz argues in Being Wrong that “the capacity to err is crucial to human cognition. Far from being a moral flaw, it is inextricable from some of our most humane and honorable qualities: empathy, optimism, imagination, conviction, and courage. And far from being a mark of indifference or intolerance, wrongness is a vital part of how we learn and change.”

Indeed, part of the reason the United States embraced more-expansionary macroeconomic policies in response to the 2008 financial crisis is that conservative economists like Martin Feldstein and Kenneth Rogoff went against their intellectual predilections and endorsed (however temporarily) a Keynesian approach.

It is possible that scholars will become increasingly likely to admit being wrong. Blogging and tweeting encourage the airing of contingent and tentative arguments as events play out in real time. As a result, far less stigma attaches to admitting that one got it wrong in a blog post than in peer-reviewed research. Indeed, there appears to be almost no professional penalty for being wrong in the realm of political punditry. Regardless of how often pundits make mistakes in their predictions, they are invited back to pontificate some more.

As someone who has blogged for more than a decade, I’ve been wrong an awful lot, and I’ve grown somewhat more comfortable with the feeling. I don’t want to make mistakes, of course. But if I tweet or blog my half-formed supposition, and it then turns out to be wrong, I get more intrigued about why I was wrong. That kind of empirical and theoretical investigation seems more interesting than doubling down on my initial opinion. Younger scholars, weaned on the Internet, more comfortable with the push and pull of debate on social media, may well feel similarly.

For all the intellectual benefits of being incorrect, however, how one is wrong matters. It is much less risky to predict doom and gloom than to predict that things will work out fine. Warnings about disasters that never happen carry less reputational cost than asserting that all is well just before a calamity. History has stigmatized optimistic prognosticators who, in retrospect, turned out to be wrong. From Norman Angell (who argued in 1909 that war among the European powers would be economically futile) onward, errant optimists have been derided for their naïveté. If the global economy tanks or global economic governance collapses in the next few years, I’ll be wrong again—and in the worst way possible.

Author Bio: Daniel W. Drezner is a professor of international politics at the Fletcher School of Law and Diplomacy at Tufts University. His latest book, The System Worked: How the World Stopped Another Great Depression, is just out from Oxford University Press.
