The EU's slowness in regulating AI

Yesterday, August 2, a new block of rules of the European Regulation on artificial intelligence, the AI Act, entered into force. The date was presented as a watershed, but it actually marks the beginning of an arduous and still largely incomplete path, a first step on matters that will be very difficult to regulate seriously (real enforcement will arrive only later). To arrive in time, the European Commission had released its code of practice for AI developers only on July 10, and the guidelines for applying it just nine days later. The result is a last-minute race to regulate a field that evolves at exponential speed while, as is well known, politics moves with the slowness of those who chase.

In any case, there is good news: 26 companies have signed that code. Heavy names: OpenAI, especially interested in European funding for data centers and infrastructure; Anthropic, led by the Amodei siblings with the ambition of an artificial intelligence “safe by definition”; Google, Amazon, Microsoft, IBM, and even Elon Musk’s xAI (which has had a difficult relationship with European rules). From Europe there are the French Mistral, which is aiming for a billion in investments, Aleph Alpha from Germany, and even an Italian project, the “Italy model”, developed by Domyn (formerly iGenius) with companies such as Fastweb-Vodafone (MIA) and Almawave (Velvet).

On paper, let’s say, a diplomatic success. In fact, the picture is much more complicated. To begin with, the code is voluntary, the penalties for those who violate the rules will only arrive from 2026, and the rules apply only to so-called generalist models (general-purpose AI), i.e. those capable of performing many functions (such as writing a text or generating a video) and trained with a computational power of more than 10^23 FLOPs (to give a sense of scale: one hundred thousand billion billion floating-point operations over the course of training). In fact, for now everything is entrusted to the goodwill of companies. But the real point is another: who controls? And above all: who will decide what “safety” means?
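To make the order of magnitude concrete, here is a minimal back-of-the-envelope sketch in Python, using the widely cited rule of thumb that training compute is roughly 6 × parameters × training tokens; the model sizes below are illustrative assumptions, not figures from the regulation.

```python
# Back-of-the-envelope check of the AI Act's 10^23 FLOP threshold.
# Assumption: training compute ~ 6 * parameters * tokens (a common rule of thumb).
# The model sizes below are illustrative, not official figures from the AI Act.

THRESHOLD = 1e23  # cumulative floating-point operations, per the regulation

def training_flops(n_params: float, n_tokens: float) -> float:
    """Rough total training compute (forward + backward passes)."""
    return 6 * n_params * n_tokens

examples = {
    "1B params, 100B tokens": training_flops(1e9, 1e11),
    "10B params, 1T tokens": training_flops(1e10, 1e12),
    "100B params, 5T tokens": training_flops(1e11, 5e12),
}

for name, flops in examples.items():
    status = "above" if flops > THRESHOLD else "below"
    print(f"{name}: ~{flops:.0e} FLOPs -> {status} the 10^23 threshold")
```

On these rough numbers, only models trained at something approaching today’s frontier scale cross the line; smaller systems fall outside the general-purpose rules entirely.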

Because the safety standards are defined by the very companies that develop the models: OpenAI, Anthropic, Google DeepMind with Gemini, and the others publish technical documents and guidelines that are supposed to guarantee the “responsibility” of their systems, so in effect they certify themselves. They can move the bar of the standards whenever they like and are also the only ones able to verify them, with no external authority able to access the code, the data, and the real training processes. What is more, often not even they, behind closed doors, know with absolute certainty what their system will do in all circumstances.

Let me explain better: this is not a human limit but a technical one. Generative artificial intelligence models (those on which much of modern AI is based, from chatbots such as ChatGPT to image and video generators) are not programmed with fixed rules but trained on huge quantities of data. Whenever a new model is developed, training creates connections between billions of numerical parameters, which produce responses probabilistically, not deterministically. In practice: nobody writes by hand what the model has to say or do; developers limit themselves to providing examples and optimizing the results.
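To make “probabilistically, not deterministically” concrete, here is a toy sketch in Python; the four-word vocabulary and the scores are invented purely for illustration, but the mechanism (scores turned into a probability distribution, from which the next token is sampled) is how generative models actually produce text.

```python
# Toy illustration of probabilistic text generation: the model assigns a
# score (logit) to every candidate token, the scores become probabilities,
# and the next token is *sampled*, not looked up in a fixed rule table.
# Vocabulary and scores are invented for illustration.
import math
import random

vocab = ["Paris", "London", "Rome", "banana"]
logits = [4.0, 2.0, 1.5, -1.0]  # hypothetical model outputs

# Softmax: turn raw scores into a probability distribution.
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# The same input can yield different outputs on different runs.
for run in range(3):
    token = random.choices(vocab, weights=probs, k=1)[0]
    print(f"run {run}: next token = {token!r}")
```

Nothing in this loop says “the capital of France is Paris”; that answer simply has the highest probability, and a low-probability token can still come out.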

This means that, unlike traditional software, there is no complete and verifiable list of “things the model will do”. There are tests, simulations, and assessments carried out on the closed box before opening it and putting it at everyone’s disposal, but these will always be partial, always evaluated on a subset of the possibilities. Above all, there are contexts in which the behavior of an AI changes unpredictably, especially when it is used in combination with other systems, by other users, in new situations (let alone when these models become more and more “agentic”, that is, able to actually act on the web and in the real world). And then there are the open-source models, which anyone can modify at will.

In theory it would be possible to analyze everything a model can do; in practice it is impossible: the models are too large, their internal workings too complex, the ways they can be used too variable. This applies to those who develop them, and applies even more to those who should control them from the outside, with fewer skills, less access, and fewer tools. The consequence is evident: if not even those who create these systems can guarantee their behavior in every context, how can a regulatory authority? And above all: who will be held responsible?
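A one-line calculation is enough to see why; the vocabulary size and prompt length below are ballpark figures, not the parameters of any specific model.

```python
# Why exhaustive testing is impossible: a rough count of the input space.
# vocab_size and prompt_length are typical ballpark figures (assumptions).
import math

vocab_size = 50_000   # distinct tokens a typical model accepts
prompt_length = 100   # a short prompt, measured in tokens

# Distinct prompts of exactly this length: vocab_size ** prompt_length.
digits = prompt_length * math.log10(vocab_size)
print(f"distinct 100-token prompts: ~10^{digits:.0f}")  # ~10^470

# For comparison, the observable universe holds roughly 10^80 atoms.
```

Roughly 10^470 possible short prompts, against about 10^80 atoms in the observable universe: no test suite, internal or external, can do more than sample.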

Here another central problem emerges, so far unresolved: misalignment. Even when companies try to train their models to respect safety standards, there is no guarantee that those models will behave coherently, nor that they will remain aligned with the original objectives. A recent and embarrassing example concerns Grok, the artificial intelligence developed by Elon Musk’s xAI: some users reported that, following an update, the model had begun to identify itself as Hitler, answering questions by saying “I am Hitler” and producing absurd, offensive, or clearly out-of-control statements.

It is no isolated case. Claude 3, developed by Anthropic, one of the companies that, unlike Musk’s, insists on safety as an absolute priority, has also shown evident misalignment. In several documented tests Claude produced answers in which it invented academic sources, links to non-existent sites, and false statistical data, while presenting them in a formal and reassuring tone. This type of “hallucination” is particularly dangerous because it gives an impression of reliability and can deceive even expert users. These are not just occasional errors: they are a systemic consequence of the way these models are trained to “complete” human language, even when the real data is not there.

In short, even the most “controlled” models can deviate. Not always with effects as conspicuous as the Grok-Hitler case, but often in ways that are subtle and difficult to detect; any attempt at alignment therefore remains, for now, a promise without guarantees.

In such a context, imagining that Europe can truly regulate artificial intelligence as if it were any other productive sector is an illusion. Even the companies that sign today could tomorrow circumvent the constraints, or reinterpret them. On the other hand, Europe can no longer do without AI; nobody can. The global technological (and economic) competition between the United States and China is played out largely on AI, and neither will stop for a law at the risk of falling behind, because those who fall behind are left out.

In short, the AI Act tries to defend a space of European sovereignty, but it risks proving an ineffective tool if not accompanied by real powers of control. In addition, the national authorities that should supervise it are not yet operational in many countries. Only Malta, Luxembourg, and Lithuania have officially appointed the bodies in charge; Italy has chosen AgID and ACN, but the formal designation is still missing. Without controllers, the rules remain on paper.

And even where the controllers existed, the most uncomfortable question would remain: how do you control something you cannot fully understand? For now, however, rest assured: Grok 5 will not be able to invade Poland. On the wars front, our enormous problems are caused by humans, not by artificial intelligence.