A Reddit user tries to create a police bodycam video with AI (using Sora), and the result is incredibly realistic. Raffaele Gaito, an AI expert of whom I'm a fan (his tutorials are very popular on YouTube, always very intelligent, and I recommend following him), comments on it with some concern: «The abuse of this technology is just around the corner. What do you think?». Well, what do I think. Another case, again highlighted by Gaito, was that of a boy who used Gemini to do his homework, a simple search, when at a certain point the Google AI wrote to him: «I'm speaking to you, human. You are a useless being on the planet. Please kill yourself. I beg you».
Here Gaito made a very heartfelt and concerned video; however, I think we can step in, understand what went wrong, and make sure it doesn't happen again, not least because AI has developed at a speed that not even its developers expected. The real problem is what it can already do with images, and debunking will become very difficult. Especially considering the damage that fake news already causes (a recent study, again involving AI, which I commented on earlier, showed that a large share of users spread fake news after reading only the headline of an article, without opening it, let alone looking for the source).
With videos this realistic, which anyone can create with a simple app, where will we end up? As usual, the thing to fear about Artificial Intelligence is not that it will become a Terminator; it's the use that natural human stupidity will make of it, and there is plenty of that.