What if Meta paid us to use our photos and data from Instagram and Facebook to train the AI?


Just a few days ago, Meta presented a new update of its large language model, Llama 3, which will be the basis of Meta AI, the artificial intelligence that Mark Zuckerberg’s company wants to deploy globally on Facebook, Instagram and WhatsApp. This move, as the company has proposed it, is not illegal.

In the presentation, Meta highlighted the work it is doing to train its model on a wide variety of data to improve the quality of Meta AI’s responses and interactions. One of the objectives is for the AI to generate its own content that reaches our social networks, is credible, and is adapted to what we like most.

This personal assistant is now available in the United States, Australia and Canada and, in addition to conversational AI, works as a generative AI that creates content. The process seems unstoppable.

Immediately afterwards, the parent company of Facebook and Instagram announced that it will use photographs and other content published by users on its platforms to train its generative artificial intelligence.

The update to its privacy policy will come into effect on June 26, 2024. This will allow the company to use posts, photos, videos and other content shared by users on its platforms to train its artificial intelligence.

As expected, the implementation of this policy has generated great controversy and concern among users, especially among artists and content creators who fear for their privacy and the unauthorized use of their works.

The company has provided an option for users to opt out of the use of their data through a specific form, although the process has been criticized as complicated and hard to find.

Other networks and technologies behind this practice

The controversy intensifies at a time when the race to develop the best artificial intelligence is in full swing, with a significant impact on technology companies and their share prices.

Last week, the chipmaker Nvidia, a key player in the AI race, announced an increase in its fiscal first-quarter profits thanks to AI.

The year opened with The New York Times suing Microsoft and OpenAI for using the newspaper’s copyrighted content to train their algorithms. This triggered a long debate about the data that AIs use to inform their responses.

This has led to licensing agreements being signed, such as those between OpenAI and the publishing group Prisa in Spain, Le Monde in France, the German media publisher Axel Springer and, recently, the News Corp group, owner of The Wall Street Journal, for 230 million euros.

X has already done it and TikTok too

It is now commonplace for these companies to use data they collect from the internet, with little oversight, to feed their AIs.

The case of X (formerly Twitter) recently came to the fore, when it was leaked that Elon Musk’s company had been using user posts to train its artificial intelligence, Grok, available to premium accounts.

In one way or another, all social networks use artificial intelligence algorithms to personalize the content we view. TikTok, for example, uses this information to build a unique “For You” feed from videos similar to the ones you consume most. Following the entry into force of the European Union’s Digital Services Act, the social network has adopted new measures that allow users to deactivate the use of certain information or data-related features.

What happens to our data and photos

If Meta or any other company wants to use our data, it must be transparent about it in its privacy policy. This gives users control over what information is used, how, and for what purpose.

With the update of the policy, from June 26, 2024, the social network will begin to use photographs, videos and texts published by its users (descriptions, comments, etc.), including content published before that date.

The user now has two options: delete the content they do not want to “give away” to the AI, or exercise their right to object to this use within Meta’s applications. For the latter, the networks have been flooded with tutorials and tips; in short, it means filling out a fairly well-hidden opt-out form.

What if Meta paid for our data?

The idea is not far-fetched at all. And perhaps it would help us, as users, feel better, or at least not so bad.

In addition to agreements between AI companies and content owners (of images, texts, videos, audio, etc.), valued in the millions of euros, companies like Worldcoin already offer cryptocurrency in exchange for user data: specifically, a biometric scan of the iris that generates a unique virtual ID.

Data is a currency. Payment with personal data to access “free” or freemium services is already regulated, as is paying a subscription to avoid the use of personal data, as happens with X Premium. A model in which companies pay for our digital data could therefore be perfectly valid. It might establish a balance between privacy and benefit: at least the user would receive a piece of the pie.

Author Bios: Francisco José Pradana is Professor of Communication and director of Postgraduate Studies and María Luisa Fanjul Fernández is a Professor at the European University