In the age of plausibility we live in, where it matters less whether something is true than whether it resembles the truth, artificial intelligence is the classic double-edged sword: it can help us tell real news from the many fakes, or it can confuse us even further by making a well-known TV personality or, worse, a politician appear to say something shocking. This is why Palazzo Chigi is trying to set limits, especially in the field of information, on the risks of vote manipulation and so-called deepfakes, which once the law is approved will have to carry «an identifying element or sign, even as a watermark provided it is easily visible, with the acronym AI», and will have to respect copyright rules, including credit to the original sources (the communications authority will supervise).
We took a look at the 25-chapter draft of the bill on artificial intelligence, updated on April 8 at 2.30 pm. It provides for an initial allocation of around 150 million euros linked to the Pnrr: 89.1 million for funds dedicated to emerging technologies such as artificial intelligence, quantum computing and cybersecurity, and another 44.7 million euros for the telecommunications sector (5G, mobile edge computing and Web3). Palazzo Chigi could also participate in dedicated venture capital funds through the National Innovation Fund for venture capital set up within Cassa Depositi e Prestiti, the custodian of postal savings. It is an interesting document, especially on manipulation and propaganda, because it provides for a sort of anti-deepfake stamp and because, for the first time, the use of artificial intelligence to commit crimes is framed as a possible aggravating circumstance. The declared objective of the law is ambitious: to promote a «correct, transparent and responsible use of artificial intelligence» in an anthropocentric dimension, establishing principles for the «research, experimentation, development, adoption and application of artificial intelligence systems and models», and to monitor the «economic and social risks of AI» and their impact on fundamental rights.
Beyond the tandem of supervisory bodies (Agid, the digitalisation agency, which is responsible for monitoring, and the National Cybersecurity Agency, ACN, which will instead carry out inspections and impose sanctions in case of violations; both are designated as national authorities, and their directors general, Mario Nobile and Bruno Frattasi, will sit on the coordination committee together with the head of the Department for Digital Transformation of the Presidency of the Council), the key point is that «artificial intelligence systems and models must be developed and applied with respect for human autonomy and decision-making power, the prevention of harm, knowability and explainability». In short, AI must not interfere with or undermine democratic life and institutions.
The artificial intelligence market, the availability of data and access to it for commercial or scientific purposes must also be as «innovative, fair, open and competitive» as possible. Companies and professionals who use AI will have to declare it to their customers. Access to artificial intelligence technologies will be restricted for children under 14 (some are pushing to raise the threshold to 18) and will require «the consent of those who exercise parental responsibility»; after that age, access will be allowed provided the tool has transparent and understandable rules.
The central issue is algorithms and their application in the most disparate fields, from healthcare to justice, from taxation to employment. While in justice it will be possible to use algorithms to assess jurisprudential orientations, it will not be lawful to use them for decisions on dismissals, assignments and career paths. Indeed, job losses will be monitored and countered by a Foundation involving Palazzo Chigi, the Mef and the Ministry of University and Research. Nor will AI be allowed to «select and condition access to healthcare services with discriminatory criteria»; if a public body uses an algorithm, it will be exclusively «for instrumental and support purposes», certainly not for waiting lists or the like. And patients will have to be informed.
The tax chapter confirms what Il Giornale anticipated months ago: artificial intelligence will have its say on tax assessments, collection and controls. It will be used to analyze the risk of evasion, but also to simplify obligations and improve taxpayers' rights, always under the supervision of human personnel. In fact, «the results of algorithmic processing» alone will not be enough to impose sanctions: «other circumstantial elements» will be needed.
The entire final part, however, is devoted to changes to the criminal and civil codes and to the rules protecting users and copyright. The executive has in mind a crackdown on the abuse and manipulation of information through artificial intelligence. AI will be an aggravating circumstance for the crime of impersonation («the penalty is imprisonment from one to three years if the act is committed through the use of artificial intelligence systems»), for the crime of fraudulently raising or lowering prices on the market or on the stock exchange, for the dissemination of sexually explicit videos, and in cases of computer fraud, stock market offences or market manipulation. The aggravating circumstance is also foreseen when AI is abused to commit money laundering crimes.
Finally, the government wants to reformulate article 612-quater of the penal code, which already punishes those who artificially manipulate other people's images, as follows: «Whoever causes unjust damage to others by sending, delivering, transferring, publishing or otherwise disseminating images or videos of people or things, or voices or sounds, wholly or partly false, generated, manipulated or altered, even partially, in any form or manner through the use of artificial intelligence systems», or «capable of presenting as real data, facts and information that are not», misleading those who view them, «is punished with imprisonment from one to five years».