Biases and manipulation: the dark side of AI


Artificial intelligence (AI) is already impacting multiple areas of our daily lives, from voice recognition on our phones to data analysis in medical research. Its advanced technology offers countless advantages, such as task automation, efficiency, the ability to process large volumes of data, and the personalization of multiple services.

However, as the use and development of this technology spreads, seemingly without limit, a less positive side emerges, stemming from the very characteristics that make AI unique: its capacity to make decisions and to execute them autonomously. When humans make decisions, they typically place them in an ethical context (or, at least, they are able to discern between good and bad in applying them).

AI, as software, currently lacks that context. It is therefore important that we, as humans, embrace the great benefits of this technology while applying our own ethics to these processes as appropriate.

But let’s briefly look at some of the main current risks, and why more and more voices are calling for stricter regulation in the face of sometimes lax ethics.

Ethics for machines

There was much talk about the case of an Amazon algorithm that, for a time, acted in a discriminatory manner when selecting profiles for a software developer position. It appears to have admitted only young, white men, automatically ruling out everyone else. The incident made newspaper headlines at the time, highlighting the threat that AI could pose in certain processes, such as human resources selection. It was just the beginning of public awareness of the risks associated with AI and algorithmic bias.

AI systems learn from data. And if that data reflects existing biases in society, there is a risk of perpetuating or even exacerbating those biases, especially without proper oversight. This issue has already been raised several times in hiring, lending, and judicial systems.
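The mechanism is easy to see in miniature. Below is a deliberately simplified, hypothetical sketch (the group names and hiring counts are invented, and a majority-vote rule stands in for a real machine-learning model): a system "trained" on biased historical hiring records ends up reproducing exactly that bias.

```python
from collections import Counter

# Hypothetical historical hiring records: (group, hired) pairs.
# The data itself is biased: group "A" was hired far more often than "B".
history = [("A", True)] * 80 + [("A", False)] * 20 + \
          [("B", True)] * 20 + [("B", False)] * 80

def learn_rule(records):
    """'Learn' a hiring rule by majority vote per group.

    This is a stand-in for a real model: any learner that fits the
    historical outcomes will pick up the same pattern.
    """
    hires = Counter(group for group, hired in records if hired)
    total = Counter(group for group, _ in records)
    return {group: hires[group] / total[group] > 0.5 for group in total}

rule = learn_rule(history)
# The learned rule simply reproduces the historical bias:
# an otherwise identical candidate from group "B" is rejected.
print(rule)  # {'A': True, 'B': False}
```

No one programmed the rejection of group "B"; it emerged from the data. This is why oversight must audit a system's outputs against its training data, not just its code.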

In 2019, for example, it emerged that Apple Card’s credit algorithm, managed by Goldman Sachs, assigned significantly lower credit limits to women compared to men, even when they shared the same income and credit profiles.

When it comes to addressing these challenges, AI is no different from other high-impact technologies, such as the automobile or the internet, which also required regulation to prevent negative consequences. Given AI’s rapid evolution, dynamic and adaptive regulation is undoubtedly required.

On the other hand, the question of liability arises in the event of errors or accidents caused by AI systems. Without a clear legal framework, determining who is responsible can be challenging. The autonomous car is a clear example: who would be at fault if it hit a pedestrian? Although the question has been widely debated internationally, there is still no unanimous answer beyond talk of a “shared responsibility” between the company that develops the car (and its AI software) and, of course, the driver.

Regulation in its infancy

In most countries, AI laws are in their early stages. In the United States, for example, regulation has been more sectoral and largely depends on individual states, although some federal frameworks exist in specific areas, such as privacy and discrimination.

From a global perspective, the challenge lies in balancing innovation with citizen protection. The Digital Rights Charter launched by the Spanish government in 2021 is an example of this pending task of citizen awareness and prevention.

However, it still needs to be implemented to be truly effective, and this is no easy task. Excessive regulation could stifle innovation, while a lack of regulation and concrete, practical measures could leave people unprotected in many ways.

Along these lines, the European Union approved the Artificial Intelligence Act in August 2024, in a joint effort between European regulatory bodies, businesses, AI experts, and civil society. Its objective: to protect people’s fundamental rights, ensure transparency in decision-making by AI systems, and establish appropriate accountability and human oversight mechanisms, among other key issues.

The Act does not attempt to regulate the technology itself, as that would hinder its implementation and development in EU industry, but rather specific use cases that may pose a risk. These fall into prohibited uses (such as social scoring of the population), high-risk uses (those whose deployment may affect a person’s fundamental rights, such as the use of AI in job placement), medium-risk uses (subject to transparency obligations, as with chatbots), and low-risk uses (risk-free automation, such as spam filters).

Manipulation, one of the main real risks

One of the clearest phenomena already emerging is manipulation: the tendency to be swept along, even unconsciously, by ideas we have not formed for ourselves.

Their use, primarily through recommendation systems, is already beginning to erode the decision-making capacity of increasingly passive consumers. In short, it is worth asking who is “truly to blame” for a situation that could contribute to a decline of democracy as we know it. Is it artificial intelligence, as a technology? Is it the machines that “make decisions” for us and deliberately encourage us to consume?

Software without intention or responsibility

First, AI systems are software, that is, computer programs with specific characteristics. As such, they have neither responsibility nor intention. The companies and organizations, both public and private, that develop or use them do. Clarifying this point is key to preventing human responsibility for the use of any technology from being projected onto AI itself.

On the other hand, there is the question of our society’s real readiness to cope with the impact of AI. Although its use has become far more widespread since the release of ChatGPT, general knowledge about its effects on our daily lives is still very vague.

It is necessary for public authorities, businesses, and educational institutions to be involved and work together on a cultural and educational revolution that fosters awareness of the impact of AI and a critical mindset. Otherwise, the era of artificial intelligence could become one of unconscious mass manipulation.

In short, AI presents immense transformative potential. However, like any powerful tool, it brings both opportunities and dangers. Although efforts are underway to develop robust regulation, there is still a long way to go to ensure its safe and ethical use in our society.

Of course, we must remember that the danger is not AI itself, but rather the way we, humans, use it. The key lies in raising public and professional awareness of the risks. And although refusing to incorporate this technology into our lives is no longer a realistic option, its ethical and responsible application and development must be encouraged from the very beginning, that is, from the initial conception of the AI project, and monitored throughout its entire life cycle. Only in this way will we achieve a positive impact from this revolutionary technology.

Author Bio: Idoia Salazar is a Specialist in Ethics and Artificial Intelligence at CEU San Pablo University
