When invisible robots influence our choices and opinions

Every time we click a star to rate a restaurant, leave a comment on a shopping site, or “like” a video, we leave a digital footprint. Individually, this may seem insignificant, a simple little sign of preference, a micro-opinion among many others. But collectively, these footprints form a vast social landscape, a cloud of visible and persistent signals that profoundly influences our behavior.


These diffuse clues, aggregated by platforms and amplified by algorithms, function like a shared memory. They tell us what is popular, trustworthy, or, conversely, suspicious. The phenomenon is so powerful that biologists and physicists have compared it to a well-known mechanism in the animal world: stigmergy. This concept, introduced in the late 1950s by the entomologist Pierre-Paul Grassé to explain collective nest construction in termites, describes indirect coordination between individuals through the traces they leave in their environment. In social insects, a pellet of earth impregnated with a construction pheromone and deposited in a particular spot attracts other worker termites, who add their own, leading to the formation of a pillar and then a dome.
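
To see how such a positive feedback loop can produce structure without any conductor, consider a minimal toy simulation (purely illustrative; the number of sites, the deposit rule, and the parameters below are our own assumptions, not data from termite studies): each new pellet lands preferentially where pellets already accumulate, so a few random early deposits snowball into a small number of dominant pillars.

```python
import random

# Toy stigmergy model (illustrative only): termites deposit pellets
# preferentially where pheromone-laden pellets already exist, so a few
# sites snowball into "pillars" while the rest stay almost empty.
NUM_SITES = 20
NUM_DEPOSITS = 1000
BASE_ATTRACTION = 1.0   # chance of starting a deposit on an empty site

pellets = [0] * NUM_SITES

for _ in range(NUM_DEPOSITS):
    # Each site's attractiveness grows with the pellets already there.
    weights = [BASE_ATTRACTION + p for p in pellets]
    site = random.choices(range(NUM_SITES), weights=weights)[0]
    pellets[site] += 1

print(sorted(pellets, reverse=True))  # a few tall "pillars", many bare sites
```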

In the digital world, an enthusiastic comment, a series of five-star reviews, or the virality of a hashtag play a similar role: they encourage others to adopt convergent behavior. Thus, without the need for a conductor, thousands of individual actions can combine to produce coherent collective behavior. But this fascinating mechanism has a downside. For while stigmergy fosters cooperation and collective intelligence, it also opens the door to manipulation and deception. What happens when certain individuals, or automated programs, leave biased or misleading traces?

The work we conducted at the Research Center on Animal Cognition, in collaboration with the Laboratory of Theoretical Physics and the Toulouse School of Economics, takes us to the heart of this question, at the crossroads of ethology, behavioral economics, and complex-systems science. Our experimental studies have revealed how, in controlled digital environments, humans exploit, repurpose, or are influenced by these traces. They strikingly demonstrate that even simple software robots, devoid of any sophistication, can profoundly reorient the cooperative dynamics of a human group.

When cooperation becomes fragile in the face of competition

The first series of experiments, published in 2023, aimed to examine the conditions under which stigmergy promotes or hinders cooperation between humans. To this end, we designed an experiment in which groups of five participants were asked to explore the same grid of 225 numbered squares, each containing a hidden value between 0 and 99, randomly distributed. Their objective was to find the squares with the highest values.

Each time a player discovered a square, they had to give it a rating out of five stars, just as one would for an online product. After all participants had opened and rated their squares, each square of the grid explored by the group, initially white, took on a shade of red whose intensity depended on the percentage of all the stars that the participants had awarded to that square during previous iterations. These colored traces were visible to every member of the group and thus constituted a collective memory of their past actions. The experiment ended after twenty iterations, and the sum of the values of the squares visited by each participant over all the iterations determined their score.
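
Schematically, the mechanics of the task can be sketched in a few lines of code (a simplified reconstruction with our own variable names and rating rule, not the software actually used in the experiment):

```python
import random

# Simplified sketch of the grid task (our own reconstruction): 225 squares
# with hidden values 0-99, players rate opened squares 1-5 stars, and each
# square's red intensity is its share of all stars awarded so far.
GRID_SIZE = 225
values = [random.randint(0, 99) for _ in range(GRID_SIZE)]
stars = [0] * GRID_SIZE          # cumulative stars per square

def rate_honestly(value):
    """A cooperative rating rule: stars roughly proportional to value."""
    return 1 + value * 4 // 99   # maps 0-99 onto 1-5

def redness(square):
    """Red intensity shown to the group: this square's share of all stars."""
    total = sum(stars)
    return stars[square] / total if total else 0.0

# One player opens a square and leaves a trace for the others to see.
square = random.randrange(GRID_SIZE)
stars[square] += rate_honestly(values[square])
print(f"square {square}: value={values[square]}, redness={redness(square):.2f}")
```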

Two different sets of rules were tested. In the non-competitive version, the players' cumulative score after a series of ten experiments did not affect their final payout, which was the same for everyone: each participant earned 10 euros. In the competitive version, however, every point counted, because the final payout (between 10 and 20 euros) depended on the sum of the values discovered, which determined the players' ranking. Players were therefore in competition to obtain the best reward.
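
The two payoff schemes can be summarized in a few lines; since the article only states that the competitive payout lies between 10 and 20 euros and depends on the values discovered, the linear interpolation below is a hypothetical mapping, not the actual formula used.

```python
def payout_non_competitive(_score):
    # Non-competitive rule: every participant earns the same flat amount.
    return 10.0

def payout_competitive(score, all_scores, low=10.0, high=20.0):
    # Competitive rule (hypothetical mapping): the payout lies between 10 and
    # 20 euros and grows with the summed values discovered; here we simply
    # interpolate linearly between the group's lowest and highest scores.
    lo, hi = min(all_scores), max(all_scores)
    if hi == lo:
        return (low + high) / 2
    return low + (high - low) * (score - lo) / (hi - lo)

scores = [820, 1175, 960, 1030, 1240]   # illustrative per-player sums
print([round(payout_competitive(s, scores), 2) for s in scores])
```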

The results showed that in the non-competitive condition, individuals tended to rate squares proportionally to their value, providing others with accurate and therefore useful information. Cooperation emerged spontaneously. By exploiting the traces left by each other, the group was able to collectively identify the best squares, far beyond what an isolated individual could have hoped for. But as soon as competition came into play, everything changed. Many participants began to subtly cheat, visiting high-value squares but assigning them a low rating so as not to attract the attention of others. Others adopted neutral strategies, assigning random or uniform ratings to cover their tracks. Thus, collective memory became unreliable, and cooperation eroded.

A detailed analysis of online behavior revealed three distinct profiles: collaborators, who honestly share information; neutrals, who leave ambiguous signals; and deceivers, who deliberately mislead others. In a competitive environment, the proportion of deceivers skyrockets. This shift demonstrates that human cooperation based on digital footprints is highly contextual. It can arise naturally when there is nothing to lose by sharing, but it evaporates as soon as self-interest leads to keeping information to oneself or misleading others. This ambivalence is found in many online environments, where genuine evaluations coexist with false comments, fake reviews, or organized spam.
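
One simple way to picture this classification (our own illustrative rule, not the analysis actually performed in the study) is to look at how strongly a player's ratings track the true values of the squares they visited:

```python
from statistics import correlation  # Python 3.10+

def classify(ratings, true_values, threshold=0.3):
    # Illustrative rule: correlate a player's ratings with the true values.
    r = correlation(ratings, true_values)
    if r > threshold:
        return "collaborator"   # ratings track the real values
    if r < -threshold:
        return "deceiver"       # ratings systematically invert the values
    return "neutral"            # ratings carry little usable information

print(classify([5, 4, 1, 2], [95, 80, 10, 25]))   # -> collaborator
```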

When social robots enter the game

The second study, conducted in 2022 and currently being published, takes the experiment even further by introducing new actors: simple programmed bots (a bot is an automated software application that performs repetitive tasks on a network). In this experiment, we used the same grid to explore and the same evaluation system, but this time, each human participant played with four “partners” who were not human, even though the players were unaware of this. These partners were bots adopting predefined behaviors. Some collaborated by accurately marking the squares, others cheated systematically, still others remained neutral, and finally, one type sought to optimize collective performance. The idea was to test whether the presence of these artificial agents, however rudimentary, could influence the human participants’ strategy in a competitive situation.
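
These four behaviors can be pictured as simple rating policies; the sketch below is a schematic reconstruction with illustrative rules, not the bots' actual code:

```python
import random

# Schematic bot rating policies (illustrative reconstruction, not the
# study's actual implementation). Each bot sees a square's true value
# (0-99) and leaves a 1-5 star trace for the human players.
def cooperative_bot(value):
    # Rates squares roughly in proportion to their true value.
    return 1 + value * 4 // 99

def deceptive_bot(value):
    # Systematically inverts the signal: good squares get bad ratings.
    return 6 - (1 + value * 4 // 99)

def neutral_bot(value):
    # Leaves an uninformative trace regardless of the square's value.
    return random.randint(1, 5)

def optimizing_bot(value):
    # Illustrative idea: rate honestly, but emphasize the very best squares
    # so the whole group converges on them faster.
    return 5 if value >= 90 else 1 + value * 4 // 99
```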

The results were spectacular. In groups where the bots were cooperative, humans performed better, discovering more high-value squares and achieving higher scores. But this climate of trust also fostered opportunistic behavior; some participants began cheating more, taking advantage of the reliability of the traces left by the bots. Conversely, in groups saturated with deceptive bots, participants adapted by becoming more cooperative or neutral, as if trying to preserve a minimal amount of usable signal in a sea of noise.

The influence of the bots was so strong that the mere composition of the group (four cooperative bots, or three deceivers and one cooperator, etc.) was enough to predict overall performance. Even more surprisingly, when comparing the performance of five humans playing together to that of mixed human-bot groups, the groups incorporating bots designed to optimize collective performance fared much better than the purely human groups. In these situations, the presence of the bots encouraged participants to adopt a collaborative approach even while they were competing.

Between collective intelligence and the risks of manipulation

These experiments, although conducted in a laboratory setting with grids of numbers, resonate strongly with our digital world, saturated with traces and automatic signals. In light of the findings, we can question the extent to which our collective choices are already shaped by invisible agents. These experiments show that stigmergy, this mechanism of indirect coordination, also functions in humans, and not just in termites or ants. They also reveal its fragility. The cooperation born from these traces is always threatened by the temptation of deception, amplified by competition or the presence of biased agents. In a world where online platforms rely heavily on evaluation, rating, and recommendation systems, these results call for urgent reflection. Because behind every rating and every comment may lie not only selfish human strategies, but also bots capable of skewing collective opinion.

However, it’s not just about denouncing malicious manipulation, fake reviews used to boost a product, or disinformation campaigns orchestrated by armies of bots, but also about considering the potentially prosocial uses of these same agents. As our experiments also show, well-designed bots can, on the contrary, foster cooperation, stabilize group dynamics, and even improve group performance. The key is to integrate them transparently and ethically, preventing them from becoming instruments of deception.

This research reminds us that we now live in hybrid ecosystems, where humans and artificial agents coexist and constantly interact through digital traces. Understanding how these interactions shape our collective intelligence is a major challenge for interdisciplinary research. But it is also a matter of civic responsibility, because the way we regulate, design, and use these traces and bots will determine the quality of our future collaborations and perhaps even the health of our digital democracies.

Author Bio: Guy Théraulaz is a Researcher at the CNRS at the Centre for Research on Animal Cognition, Centre for Integrative Biology at the University of Toulouse
