The recent resignation of Mauro Ferrari as president of the European Research Council has thrown into sharp relief the distinction between “basic” and “applied” research, as understood today.
In the UK, this is particularly topical, coinciding as it does with the start of parliamentary scrutiny of the role that the government’s proposed “high risk, high reward” research agency might play in the nation’s research ecology. But given the squeeze on public finances across the world that will inevitably extend into the foreseeable future following the Covid-19 pandemic, the time is ripe for a radical, multinational rethink of why taxpayers should be funding research at all.
According to the prevailing science policy mythology, basic research provides the surest route to generating applications of large-scale, long-term public benefit. The myth has been sustained by a rather flexible conception of “research impact” that makes great play of “unforeseen benefits” accruing over an indefinite period. The exact grounding of this myth has varied internationally, but the version with the greatest totemic status involved the establishment of the US National Science Foundation (NSF) after the Second World War.
This was inspired by Massachusetts Institute of Technology vice-president Vannevar Bush’s report Science, the Endless Frontier, which explained the building of the war-ending atomic bomb in terms of the critical mass of distinguished physicists who mobilised behind the cause. However, these researchers did not spontaneously self-organise into the Manhattan Project. Rather, responding to rumours of a Nazi atomic bomb project, the US government, in consultation with scientists such as the Princeton-based Albert Einstein, set the parameters of the project, including eligibility for participation.
But the scientists then went about the project in an unprecedentedly free way, and the result was massive cost overruns, minimal oversight and high levels of uncertainty. A bomb was eventually detonated successfully in the New Mexico desert, but to what extent is this impressive achievement correctly described as a triumph of “basic research left to its own devices”?
Bush and others backing the version of the NSF that Congress passed in 1950 certainly subscribed to that view. But, more importantly, they presumed that for basic research to be “free”, it must be devolved to the peer review processes that normally govern discipline-based academic work. However, the Manhattan Project was neither the product of discipline-based academic work nor the straightforward application of such work. It was a profoundly interdisciplinary project that involved not only physicists but also engineers and medical professionals. It took all concerned way outside their intellectual comfort zones.
A more appropriate model for thinking about research of this sort – as well as for the proposed new UK research agency – is what Donald Stokes, a pioneer of empirical voter studies, called “Pasteur’s Quadrant”. Stokes developed a 2×2 matrix of relationships between “basic” and “applied” research in the 1990s as part of a prospectus on possible directions for post-Cold War US science policy. He recognised that Louis Pasteur’s long-term contributions to science – not least in such pandemic-relevant fields as epidemiology and public health – were a case of “applied” concerns steering “basic” research, rather than the other way around.
Moreover, Pasteur was hardly unique. In the 20th century, the great private foundations (such as Rockefeller) and corporate R&D units (such as Bell Labs) were the main drivers of the signature breakthroughs in molecular biology, behavioural science and neuroscience, as well as information and communication technology, including artificial intelligence.
Of course, the researchers involved were academically well trained. More importantly, academia was central to absorbing these breakthroughs into the curriculum, so that many more than the original funders could benefit. However, when it comes to providing an environment for the actual conduct and evaluation of such cutting-edge research, the record of universities – and especially of established academic disciplines – has been chequered, to say the least.
The complaints of academic innovators about their home turf are legion and largely justified, and they go beyond lack of time and funds. Peer review itself routinely conflates assessment of a work’s validity on its own terms with assessment of its fit to some larger discipline-based agenda that, in the end, may matter only to other academics.
The UK’s new funding agency is, of course, modelled on the US Defense Advanced Research Projects Agency (Darpa). The right lesson to take from the Manhattan Project would have been to establish Darpa immediately, rather than the NSF. Darpa was eventually created, but almost a decade later, in direct response to the Soviet Union’s launch of Sputnik 1 in 1957. By that time, the basic/applied science policy mythology had already set in.
The UK’s Darpa should be seen as a direct challenge to this mythology. In the context of tightened public funding, why should we presume that “basic research” of the truly fundamental sort is more likely to come from the disciplinary agendas of self-appointed “basic researchers” than from more organised responses to external exigencies agreed by scientists, governments and the public?
Author Bio: Steve Fuller is Auguste Comte Professor of social epistemology at the University of Warwick.