Meta trains its AI in Europe on users’ posts to make it more “local”

So: Meta finally has the green light, and will begin training its artificial intelligence in the EU using interactions with the AI and the public content shared by adult users on Facebook and Instagram. Is that a good thing? A bad thing? It is probably inevitable, and I think it is as much a good thing as a bad one; I will get to that shortly. Zuckerberg says the goal is to create “more European” models, able to better understand local languages, cultural references and sensibilities, and fair enough. Users, he points out, can object to the use of their data (a very simple and clear email will be sent out, Zuck assures).

It must be said that in the United States this process has been underway for some time, while in Europe we are cautious with regulations (also because we Europeans do not have an AI of our own; all we can do is set rules).
On the other hand Grok too, Elon Musk’s AI built into X, feeds every day on users’ posts (which is why Musk bought his own company, folding X into xAI), and the principle is the same: if it is written online, it can end up in the machine’s brain and serve to train the AI better. I would not turn this into a privacy argument (as many do, the same people who post on Instagram every other minute): after all, we are talking about public content, and we already know we hand it over in exchange for a service (otherwise WhatsApp would hardly be free).

I am more puzzled about running machine learning on the content posted by users: will it really do the AI any good? Training an artificial intelligence on social media means training it not only on language and culture (bringing it closer to us), but also on the confusion, hoaxes, obsessions and paranoia that circulate freely every day. Anyone who regularly uses an AI with a subscription already knows how often the information has to be double-checked: machine learning does not distinguish true from false. If something is popular, if it gets repeated and goes viral, the AI records it as relevant: that holds for what it finds on the web, let alone on social media. If there is no upstream filter, the algorithm digests it and serves it back up.

In short, in theory these models will “learn to speak like us”; in practice they risk learning to “believe” that the Earth is flat or that vaccines cause autism, because on social media plenty of people say so. I hope the AIs that learn from users do not end up like many users. Then again, Zuckerberg will have thought this through, not least because he is competing with all the other AIs, and he certainly will not want to make his own stupid.