Meta will identify AI-generated images on its social networks

Meta will identify and label AI-generated images on its social networks (Facebook, Instagram and Threads) so it can flag them to users. The announcement came from Nick Clegg, head of Meta’s global affairs, in a note shared on the company’s blog.

Images created with software developed by Meta already carry an invisible “watermark” that identifies them as generated with this technology. The company has now decided that, to give its users greater guarantees of transparency, it will also work to label images created with rival platforms: OpenAI, Google, Midjourney, etc.

This choice, Nick Clegg explained, was made to meet users’ needs: “As the difference between human and synthetic content grows ever thinner, people want to know where the line lies”. And further: “Our users have let us know they appreciate transparency when it comes to AI, so it’s important to help them understand when the content they see was created with it”.

Meta: When will AI-created images be labeled?

It is still too early to know exactly when Meta will be able to label images created with other companies’ software. For now, according to the information available, the company is working on it and the project is still under development. The labels created for these images, Clegg explained, will be available in all languages supported by the platforms, so that no user is excluded.

Clegg said the company wants to roll out this solution quickly, as 2024 will be a year full of elections around the world. Last year there were already numerous deepfakes involving politicians and events of public interest, and their spread during elections could be harmful.

The executive then explained that, for now, the planned labels will only cover images, but the company will also give users who share audio or video files created with artificial intelligence the option to flag them as such, so as not to mislead other users.

Finally, Meta has said it is studying technology capable of automatically detecting AI-generated content, even when it carries no invisible labels.