“There is no need to be afraid of artificial intelligence: it is part of a great revolution underway, but it does not take away people’s jobs. Today the challenge at a global level is not AI against human beings, but between the human beings who know how to design, implement and govern it and those who do not.” So says Mario Nobile, director general of AgID, the Agency for Digital Italy. He and his team are responsible for coordinating and promoting the national strategy for artificial intelligence.
On 10 October the new Italian law on AI came into force, regulating its use and promoting “made in Italy” expertise. The law respects constitutional principles and fundamental rights and follows the European AI Act: Italy is the first country in Europe to adopt a national law on the matter.
Director, what distinguishes the new Italian law from the European AI Act?
«The AI Act is essentially a law that prohibits certain practices because they are contrary to European values. I am thinking of social scoring, for example, the evaluation of an individual’s social behavior carried out by artificial intelligence through algorithms, but also of predictive systems that estimate a person’s propensity to commit crimes. On other aspects, however, the AI Act invites countries to apply the criterion of risk assessment. We engineers are taught that risk is the combination of several factors: vulnerability times exposed value times hazard.”
Let’s take a concrete example accessible to all of us.
«Take the seismic example.
Seismic “hazard” depends on a natural factor: the San Andreas fault in California and the Sahara desert differ substantially in natural terms. “Vulnerability” is the building’s capacity to resist, whether it is made of clay or reinforced concrete. The “exposed value” is whether that building houses a hospital or stands empty.
By multiplying these three factors I obtain a high, medium or low risk. The message of the AI Act is that such an evaluation is needed.”
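Restated in compact form (the notation here is ours, not the director’s or the law’s), the rule of thumb he is describing is:

\[ \text{Risk} \;=\; \text{Hazard} \times \text{Vulnerability} \times \text{Exposed value} \]

so an empty reinforced-concrete building far from any fault scores low on every factor and yields a low risk, while a clay-built hospital near the San Andreas fault scores high on all three.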
The Italian law accelerates the adoption of artificial intelligence in all fields and proposes an anthropocentric vision of the system: man at the center.
What does that mean?
«That man is not supplanted by the machine but governs it. We put humans at the center because AI, to date, is not and cannot be completely autonomous. I’ll give an example based on a very recent paper produced by a German think tank in which we took part alongside colleagues from Ukraine and the United Arab Emirates: we borrowed the definition of ‘autonomous driving’ from SAE, the American Society of Automotive Engineers. Imagine various levels of autonomy, from level 0, the old car with a steering wheel and manual gearbox, up to level 5, completely autonomous. Although there are experiments, level 5 does not currently exist, neither for driving cars nor in other AI applications. The Italian law therefore provides for a human being to be present from level 0 to level 4. Level 5 is a topic to be discussed today because it raises great ethical and philosophical issues, the use of technology to overcome certain human limits: but it is not there yet.”
The law underlines that the use of artificial intelligence must never be discriminatory and must take gender equality into account.
«I am optimistic and I see room for improvement: these systems will reduce the discrimination we encounter in the real world today. A simple example of discrimination in our country is linked to knowledge of the English language. Think of the revolution when the website of an Italian municipality or company, with an operator who speaks Italian, can communicate in all the languages of the world thanks to AI. Then there are people with disabilities: AI will obviously help the inclusion of those with permanent disabilities, but also of people with temporary or situational disabilities. If the ophthalmologist puts drops in my eyes, at that moment I need a tool that helps me read a website or an app.”
And non-discrimination between men and women at work? Will AI close the current gap?
«It is a topic that is particularly close to our hearts, because AgID was the first public administration to be certified for gender equality, so culturally we promote good practices to reduce the gap. But this is about the training that Large Language Models have received so far. As the experts say: garbage in, garbage out. If you put garbage into a model, you will get garbage out of it. I put the question to you: how much of the negativity and distortion we carry as human beings are we entrusting to this artificial intelligence? It would not be good if we took certain cultural, journalistic and social messages for granted.”
You have touched on the big issue of training AI models. One of the Agency’s tasks is to train staff and provide guidance in both the public and private sectors. In large companies everything is interconnected: all it takes is one employee with access to the system who has not been trained, and the whole process goes haywire.
«We don’t want 59 million Italians who are prompt engineers, that is, for example, engineers who train chatbots. A few of those are enough, and thank God we have them. The famous CAIO, the Chief Artificial Intelligence Officer of a company, does not have to be a technologist but a process expert, because the new tools improve precisely those processes. Then, down the chain, as you rightly say, people must be trained and trained again.
But this is a human issue, because unfortunately training is not 100% effective in every human activity.”
Among the purposes of the law are the development of medical research, prevention and the production of new drugs.
Health is our most valuable asset, but it brings with it the sensitive issue of data. How do you resolve that?
«In Italy we have four lead regions that have obtained funding for the REG4AI project, Regions for artificial intelligence.
They are Liguria for healthcare, Lombardy for environment and transport, Puglia for public administration and Tuscany for emergency governance.
With our colleagues from Liguria we are working on how to reduce waiting lists. Take this example: ten people are booked for a CT scan tomorrow. The worst thing that can happen for the efficiency of the system is that one of them does not show up.
They won’t do it on purpose; something must have prevented them from coming. Well, if we were able to use more than just health data, we could overbook, that is, book more appointments than there are slots so that they can be swapped if necessary. We might know that Carla takes the bus to the hospital because she lives far away, while Mario goes there on foot.
The AI would allow us to warn Carla: rain is forecast tomorrow and there is a transport strike, so most likely you won’t be able to come to the hospital. You have two alternatives: either you come by another means, or you have the CT scan the day after tomorrow. It would also allow us to tell Mario: get ready, because tomorrow, probably between 3 and 4 pm, you will have your CT scan.
This means using more than just health data to improve a collective good, namely the maximum use of the CT machine.”
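To make the logic concrete, here is a minimal sketch of that overbooking idea; the names, probability figures and data fields are our own illustrative assumptions, not part of the REG4AI project or any AgID system.

```python
# Toy sketch of no-show prediction and slot reallocation for a CT waiting list.
# All thresholds, fields and names are assumptions made for illustration only.

from dataclasses import dataclass

@dataclass
class Booking:
    patient: str
    travels_by_bus: bool  # non-health data: how the patient reaches the hospital

def no_show_risk(b: Booking, rain_forecast: bool, transport_strike: bool) -> float:
    """Crude estimate of the chance the patient misses tomorrow's slot."""
    risk = 0.05  # assumed baseline no-show rate
    if b.travels_by_bus and (rain_forecast or transport_strike):
        risk += 0.60  # bus riders are hit hardest by strikes and bad weather
    return min(risk, 1.0)

def plan_ct_day(bookings: list[Booking], rain_forecast: bool, transport_strike: bool) -> None:
    """Warn at-risk patients and line up flexible patients for any freed slot."""
    at_risk = [b for b in bookings
               if no_show_risk(b, rain_forecast, transport_strike) > 0.5]
    flexible = [b for b in bookings if b not in at_risk and not b.travels_by_bus]
    for b in at_risk:
        print(f"Message to {b.patient}: bad weather and a transport strike are expected; "
              "come by another means or reschedule to the day after tomorrow.")
    for freed, substitute in zip(at_risk, flexible):
        print(f"Message to {substitute.patient}: be ready, you may take over "
              f"{freed.patient}'s slot tomorrow afternoon.")

if __name__ == "__main__":
    plan_ct_day(
        [Booking("Carla", travels_by_bus=True), Booking("Mario", travels_by_bus=False)],
        rain_forecast=True,
        transport_strike=True,
    )
```

A real system would of course combine many more signals and respect data-protection constraints; the point of the example is simply that mobility and weather data, not health data, drive the rescheduling decision.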
A tip for young people: how to approach artificial intelligence intelligently?
«Critical thinking, critical thinking, critical thinking: I repeat it three times.
We have trained and passionate young people who compete with “hungrier” peers from other parts of the world, peers who open these tools up, take them apart and put them back together. My invitation is not to be afraid of what is new: get your hands on it, otherwise you won’t win the competition. But I am confident: our young people will know how to use their heads.”