The best AI story may well be Golem XIV, by the Polish writer Stanislaw Lem, best known for the novel Solaris. Many writers have striven to understand and describe artificial intelligence in all its potential, positive and negative. But who said artificial intelligences should care about their programmers? Lem imagines a radically different future. Golem XIV, the perfect brainiac, quickly tires of managing humanity's industrial and military systems. Having abandoned its creators to their fate, Golem XIV, at the height of its evolution, becomes a philosophical machine: it questions its own destiny and the destiny of the cosmos. Artificial intelligence, having reached its peak of development, begins to ask itself the questions humanity has always asked: who am I, where am I, what will become of me? In the end, Lem believes in man more than he believes in the machine: the radical need to make sense of reality is the true mark of a superior intelligence, whether natural or artificial.
Science fiction, however, has had great fun inventing terrible plots in which the machine surpasses man and perhaps even does without him. A quick look at the cinema. When the Skynet computer network becomes a self-aware neural net, the Terminators decide to get rid of us. The voyage of 2001: A Space Odyssey is guided by a computer so advanced that it experiences feelings of loneliness, marginalization and inferiority; the result is that it massacres the spaceship's crew. In Transcendence, a scientist uploads his consciousness to the network. Too bad it takes on a life of its own and causes a disaster. In A.I. Artificial Intelligence, the machine does not know that it is a machine, and it is hard to distinguish the artificial from the natural. But Spielberg's film, developed from a project by Kubrick, lacks the poetic depth of the Brian Aldiss story on which it is based: Supertoys Last All Summer Long, a small masterpiece by a writer greatly underestimated by Italian publishing.
Back to the books. The Roots of Evil, by Maurice Dantec, a visionary French author who died prematurely, looks at first like just a violent thriller. But it takes an unexpected turn when the true detective emerges: a “neuromatrix”, an artificial intelligence capable of predicting the behavior of every human type, including the serial killers who populate the novel, published in 1995.
The classic of classics, however, is Neuromancer by William Gibson, a 1984 novel that has become, decade after decade, more and more true. Among other things: you know the word “cyberspace”, now in common use to indicate the virtual space in which users (and programs), connected to one another through a telematic network (the internet), can move and interact for the most diverse purposes? He invented it. Do you know the matrix, the architect, Zion, dub music? All Gibson's ideas, even if they made the fortune of the Wachowski sisters, directors of The Matrix saga. Above all, do you know about AI, artificial intelligence, and the whole debate that accompanies its evolution, including whether it could endanger the existence of our species? That is exactly what Neuromancer is about. The funny thing is that William Gibson claims to have no real technological expertise. It is probably an understatement. It is true, however, that Neuromancer was written in a rush, in twelve months, to meet the terms of a punishing contract. There is no point in recounting the plot of Neuromancer: it is also a splendid noir, and we don't want to ruin the pleasure of reading it. We can say that the protagonist is a seemingly washed-up hacker named Case. The hacker is the operator capable of moving through cyberspace and breaking into the most protected and inaccessible databases. Gibson describes Case as a cowboy constantly exploring the frontier. The other great protagonist is called Wintermute and is an artificial intelligence. It is difficult to understand what Wintermute, who acts (perhaps) like a puppeteer, wants: to free itself from the constraints imposed by man? To disappear into the depths of cyberspace? To shut itself down? To merge with another AI? Or is the puppeteer actually a puppet? We won't tell you. What is clear is that it is a mistake to believe a machine has a purpose even remotely similar to a human one. Conclusions opposite to Lem's.
And perhaps this tells us that we ourselves, when we interact with AI, never really know what (or who?) we are dealing with. Dangerous, isn't it?