
Language models like GPT, Claude, and Gemini no longer just answer questions or write text. They execute code, analyze data, and even conduct experiments in robotic laboratories. Google has dubbed this idea the “co-scientist”: a virtual assistant capable of designing, planning, and executing complete experiments from simple instructions in natural language.
This technology is already yielding results. In collaboration with universities such as Stanford and Imperial College, the co-scientist has identified previously unknown biological mechanisms, suggested potential treatments for diseases like liver fibrosis, and automated parts of the scientific discovery process. Other projects, such as Future House, follow a similar path, taking the automation of science to a level that would have seemed like science fiction just five years ago.
This revolution is accompanied by a shift in the habits of researchers themselves. A recent survey in Nature revealed that 81% of scientists already use tools like ChatGPT at some stage of their work, from writing articles to generating hypotheses or drafting funding proposals. The integration of artificial intelligence (AI) into science is advancing at an unprecedented pace, but our critical reflection on its impact is not keeping up.
Clear advantages, clear risks
AI can help us write better, overcome language barriers, and explore complex data. But it also introduces significant risks.
First, there is the problem of lost creativity. An analysis of more than 45 million articles and almost 4 million patents showed that, since the mid-20th century, the proportion of truly disruptive work has steadily declined.
Science is advancing, yes, but increasingly in small steps rather than transformative leaps. If we start using language models to write proposals or generate ideas, we are likely to reinforce this trend: trained on past research, they tend to reproduce dominant approaches and avoid anything radically new.
An AI model can push Newton’s laws to their limits, but it wouldn’t invent the theory of relativity. It can write thousands of variations of an article on classical mechanics, but it wouldn’t ask whether Schrödinger’s cat is alive or dead because it would never have invented quantum mechanics.
A machine cannot imagine new ideas
Deep innovation requires intuition, imagination, and the ability to challenge paradigms—attributes that remain profoundly human today.
There are also ethical risks. AI can fabricate data, exaggerate results, or propose experiments based on false premises, without the user noticing.
It can even influence public opinion and scientific production on a massive scale, as the sugar industry did in the 1960s when it promoted research that diverted attention from sugar’s health effects and blamed fats instead.
With tools capable of generating persuasive text on an industrial scale, manipulation could be far more effective. Furthermore, if advanced platforms become concentrated in the hands of a few companies or countries, the capacity for scientific discovery could be monopolized, generating new forms of scientific and technological inequality.
What if a machine is both the author and the editor?
An even more unsettling scenario is the simultaneous delegation of writing and evaluating proposals to language models. This is not science fiction: a recent study shows that one in five researchers already uses AI in peer review, and between 7% and 17% of reviews at scientific conferences on AI have been significantly modified with these tools.
If one AI generates a proposal and another AI evaluates it, we enter a self-referential system in which models reproduce their own biases and human creativity is sidelined. This could trap science in a spiral, stifling the kind of transformative discovery that has characterized the great leaps in scientific history.
An ethical framework to protect science
To avoid these risks, we propose a series of ethical principles for integrating large language models without compromising scientific integrity:
- Address biases systematically. AI is not neutral. It needs ongoing audits, interdisciplinary teams, and external mechanisms to detect biases invisible to experts themselves.
- Demand full transparency. Researchers must document data, parameters, and decisions made by the models, as well as use explainability techniques that allow us to understand how a conclusion was reached.
- Clarify attribution and intellectual property. The line between assistance and authorship is blurring. We need clear rules on what content is human-generated and what is generated by AI.
- Ensure human accountability. Everything produced by AI must be verified by scientists. There can be no unsupervised automated decisions.
- Protect transformative research. We must prevent AI from pushing science into complacency. Agencies must support risky, interdisciplinary, and radical projects.
- Redefine the role of the scientist. We must strengthen intuition, critical thinking, ethics, and long-term vision.
- Create adaptive governance systems. Technology is evolving too rapidly for static regulations. We need continuous and flexible oversight.
- Reduce dependence on proprietary models. Science cannot rely on a few commercial platforms. We must promote open, diverse, and resilient ecosystems.
AI can accelerate science in extraordinary ways. But if we don’t act carefully, it could also impoverish it, making it less creative, more unequal, and less reliable. At a time when the planet faces urgent challenges, we need powerful tools, yes, but also rigorous, transparent, and deeply human ones.
Author Bios: Sergio Hoyas Calvo is Professor of Aerospace Engineering at the Polytechnic University of Valencia, Javier Garcia Martinez is Professor of Inorganic Chemistry at the University of Alicante, and Ricardo Vinuesa is Associate Professor of Aerospace Engineering at the University of Michigan.