Licence to publish will restore trust in science


In the past few years, it has become clear that the quality of published scientific research is not as good as it could and should be.

It has been estimated that 50 per cent of published research across all scientific fields, including social science, is questionable – to varying degrees and for varying reasons. The repercussions of this for society can be profound: consider the measles outbreaks that have resulted recently from the faith some people still have in discredited studies claiming that vaccines can cause disorders such as autism.

Some argue that what has come to be known as the reproducibility crisis is the result of pressure on scientists to produce papers, leading them to prematurely publish data that have not been verified. While there is undoubtedly some merit in that argument, I do not think that it is the main cause of the crisis. If it were, all scientific papers would at least provide evidence of well-designed, properly conducted and thoroughly analysed studies, even if those studies had not been replicated. Yet anyone who has peer-reviewed scientific articles is well aware that much research does not meet even basic standards of sound science. Even published science often contains serious flaws.

It is commonly assumed that if a scientist has successfully completed a PhD, he or she must be capable of planning and executing good research. But during my long career, I have met hundreds, if not thousands, of scientists with doctorates who lack basic training in such fundamental issues as hypothesis testing, experimental design, data analysis and scientific writing.

Scientific training needs to be significantly improved and extended. No one should be allowed to practise as a scientist – whether in academia, industry or government – unless they have undergone a new, standardised postdoctoral training programme and passed a rigorous exam, with the results preferably published openly.

To scientists, this idea might seem radical, but just about all other serious professions operate on this basis. For example, a UK doctor will usually have spent five years at medical school to obtain their medical degree, followed by a two-year foundation programme during which work experience is combined with more training. After that, specialty training is required: presently, three years for a general practitioner and longer for some other specialties. Similarly, becoming a chartered accountant requires graduates to complete at least three years of on-the-job training, during which they need to pass a series of examinations.

It could be argued that science is a more varied profession than medicine or accountancy, making it more difficult to define a key set of knowledge and understanding that all researchers should possess. However, I do not believe this to be the case. There are underlying principles that all scientists should adhere to and that could readily be taught and tested via examinations. Examples would include hypothesis testing, integrity and experimental design.

To provide just one example, all scientists, be they biologists, chemists, physicists or mathematicians, should probably be able to identify the factors necessary for a successful clinical trial of a new drug, such as a large and varied sample of people, inclusion of an established drug as a positive control, double-blinding to eliminate bias and independent data analysis. As in other professions, it may also be necessary to have more specialised training and examinations geared to the scientist’s area of focus. But it would be eminently possible to design a set of examinations to provide a globally recognised badge of quality that ensures the scientist has knowledge and skills of the highest standard.

I would want all aspects of the training to be overseen by a professional body, just as it is for doctors (the General Medical Council) and accountants (the Association of Chartered Certified Accountants). But the training itself, as well as the examinations, could be provided by companies established specifically for the purpose, as also occurs in many other professions, including accountancy.

I recognise that it will take some time to implement my suggestion. However, organisations capable of conducting the necessary tasks, such as designing the training requirements, already exist, in the form of academies such as the Royal Society of Biology and the Institute of Physics.

While this work is being completed, further interim steps could be taken to improve the quality of published scientific research. A relatively easy one to implement would be to replace the conclusions section of scientific articles – which nearly always does nothing more than repeat the abstract – with a section titled “limitations”. Simply requiring scientists to admit that their studies have limitations would do a lot to focus their minds on the technical robustness of what they are presenting.

It will probably never be possible to ensure that all published research is robust and, hence, repeatable. However, taking these steps would contribute a great deal to the vital task of making scientists and science trustworthy again.

Author Bio: John Sumpter is a professor in the department of life sciences at Brunel University London.