What artificial intelligence awaits us in 2025


Artificial intelligence (AI) marks a turning point in the history of technology, and 2025 will bring more surprises. It is not easy to predict what awaits us, but it is easy to identify the trends and challenges that will define AI's immediate future over the coming year. Among them is the challenge of the so-called “centaur doctor” or “centaur teacher”, a key issue for those of us immersed in the development of AI.

The explosion of AI-based science

AI has become a fundamental tool for tackling major scientific challenges. Areas such as health, astronomy and space exploration, neuroscience or climate change, among others, will benefit even more than they already do.

AlphaFold (whose creators were awarded the 2024 Nobel Prize in Chemistry) has determined the three-dimensional structure of 200 million proteins, practically all of those known. Its development represents a significant advance in molecular biology and medicine, as it facilitates the design of new drugs and treatments, and 2025 will see its use take off (free of charge, by the way).

ClimateNet, meanwhile, uses artificial neural networks to perform precise spatial and temporal analysis of large volumes of climate data, which is essential for understanding and mitigating global warming. The use of ClimateNet will be essential in 2025 to predict extreme weather events with greater accuracy.

Medical diagnoses and trials: the role of AI

Justice and medical diagnosis are considered high-risk scenarios. In these areas, it is more urgent than anywhere else to establish systems in which humans always have the final decision.

As AI experts, we work to ensure that users can trust these systems: that they are transparent, that they protect people, and that humans remain at the centre of decisions.

This is where the “centaur doctor” challenge comes into play. Centaurs are hybrid human-algorithm models that combine the formal analysis of machines with human intuition. A “centaur doctor plus AI system” makes better decisions than either a human or an AI system alone. A doctor will always be the one to press the accept button, and a judge will be the one to determine whether a sentence is fair.

The AI that will make decisions for us

Autonomous AI agents based on language models are the goal for 2025 of large technology companies such as OpenAI (ChatGPT), Meta (LLaMA), Google (Gemini) or Anthropic (Claude).

Until now, these AI systems have made recommendations; in 2025, they are expected to make decisions for us.

AI agents will perform personalized and precise actions in tasks that are not high risk, always adjusted to the user’s needs and preferences. For example: buying a bus ticket, updating the calendar, recommending a specific purchase and making it. They will also be able to answer our email, a task that takes up a lot of our daily time.

In this vein, OpenAI has launched AgentGPT and Google Gemini 2.0, platforms for developing autonomous AI agents. For its part, Anthropic offers two updated versions of its Claude language model: Haiku and Sonnet.

The use of our computer by AI

Sonnet can use a computer much as a person would: it can move the cursor, click buttons, type text and navigate screens, making it possible to automate our desktops. Users can grant Claude access to, and control over, certain aspects of their personal computers. This capability, dubbed “computer use”, could revolutionize the way we automate and manage our everyday tasks.

In e-commerce, autonomous AI agents will be able to make purchases for the user. They will provide advice on business decisions, automatically manage inventory, work with suppliers of all kinds (including logistics providers) to optimize the replenishment process, update shipping statuses, generate invoices, and so on.

In the education sector, they will be able to personalize curricula for students. They will identify areas for improvement and suggest suitable learning resources. We will move towards the concept of a “centaur teacher,” assisted by AI agents in education.

The approve button

The notion of autonomous agents raises profound questions about human autonomy and human control. What does “autonomy” actually entail?

These AI agents will introduce the need for pre-approval. What decisions will we allow these entities to make without our direct approval (without human control)?

We are faced with a crucial dilemma: knowing when it is better to be “automatic” in the use of autonomous AI agents and when we need to make the decision, i.e., resort to “human control” or “human-AI interaction.”

The concept of pre-approval is going to become very important in the use of autonomous AI agents.

The small ChatGPTs that will arrive on our phones

2025 will be the year small, open language models (SLMs) take off. These are language models that will be able to run directly on a mobile device; they will let us control our phones by voice in a much more personal and intelligent way than assistants like Siri, and they will answer our email for us.

SLMs are compact and more efficient, and do not require massive servers to operate. They are open source solutions that can be trained for specific application scenarios. They can be more respectful of users’ privacy and are ideal for use on low-cost computers and mobile phones.

They will also be of interest for enterprise adoption, which will be feasible because SLMs offer lower cost, greater transparency and, potentially, greater auditability.

SLMs will enable applications for medical recommendations, education, automatic translation, text summarization and instant spelling and grammar correction, all from small devices and without the need for an internet connection.

Among their important social advantages, they can bring language models to education in disadvantaged areas, and improve access to diagnoses and recommendations through specialized health SLMs in regions with limited resources. Their development is essential to support communities with fewer resources, and they can accelerate the arrival of the “centaur teacher” or “centaur doctor” in any part of the planet.

Progress in European AI regulation

On June 13, 2024, the European AI regulation (the AI Act) was approved; it will become fully applicable over the following two years. During 2025, standards and evaluation regulations will be developed, including ISO and IEEE standards.

Earlier, in 2020, the European Commission published the first ever Assessment List for Trustworthy Artificial Intelligence (ALTAI). This checklist includes seven requirements: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability. These requirements form the basis for future European rules.

Having assessment standards is key to auditing AI systems. Let’s look at an example: what happens if a self-driving car has an accident? Who is responsible? The regulatory framework will address questions like these.

A goal to mention

Dario Amodei, CEO of Anthropic, in his essay Machines of Loving Grace (October 2024), sets out the vision of big tech companies: “I think it’s critical to have a truly inspiring vision of the future, not just a plan to put out fires.”

There are contrasting views from other, more critical thinkers, such as Yuval Noah Harari, as set out in his book Nexus.

That is why we need regulation: it provides the balance necessary to develop reliable and responsible AI, to advance the great challenges for the good of humanity highlighted by Amodei, and to have the governance mechanisms needed as a “firefighting plan”.

Author Bio: Francisco Herrera Triguero is Professor of Computer Science and Artificial Intelligence at the University of Granada
