Google's simultaneous translation: science fiction become reality

So, let's face it: Google has just rolled out the feature we all dreamed of, the one that until yesterday still seemed like something out of a B-movie science fiction film. It's called near-real-time voice translation on Meet (the "near" is really less than a second): the other person's voice is translated into your language as they speak, keeping the tone, emotion and rhythm of the real person, not the usual robotic voice.

Online you'll find enthusiastic previews from those who have tried it (soon we'll all have it), and users' tests are incredible: the latency is minimal and the conversation stays fluid. I can't wait to use it (to talk to whom? No idea, I'll figure that out later). In short, we are entering an era in which talking to anyone, in any language, live, becomes normal.

And Apple? Well. Here it gets interesting, and I have to revisit what I wrote yesterday about iOS 26, confident Apple addict that I am.

Let me explain: in Europe, many of the new features Apple is launching in America (like iPhone mirroring on the Mac, some parts of Apple Intelligence, and more) will not arrive here (for now), and it's not because Europe prohibits them. It's because Apple prefers not to launch them under the conditions required here by the Digital Markets Act (DMA), the new European law that requires digital giants to open their walled gardens a little. You can no longer build key features that only work inside your own ecosystem, with no competition and no interoperability.

Apple, for example, would have to make features like mirroring compatible with open standards, or at least accessible to third parties. In the United States it doesn't have this problem: the DMA doesn't exist there, so Cupertino can release everything however it likes, inside its perfectly closed fence. Here, on the other hand, it would have to modify those features to comply with European law. So what does it do? Quite simply: it just doesn't release them.
Officially, Apple says it does this to protect users' privacy and security; in other words, it doesn't want to lose control over the system, which I understand and have defended on principle. Except that the competition is becoming ruthless.

The point is that while Google, which also has to comply with the DMA, adapts and meanwhile lets you talk to the world in real time, Apple risks going from the company that "takes care of the user experience" to the company that "takes care of its own control experience", and that is a real risk. Apple has always been a master at selling its delays as quality choices ("we don't chase trends"), but in the last two years everything has been changing at the speed of light, and when you find yourself saying "security first" while others already offer voice translation straight out of a spy film, the perception you give is that of being left behind, not of being wiser.

Because before long even the last of the users will notice that being "secure" but silent, in a world where everyone talks to everyone, is no longer enough. Apple, for now, lets you talk … with yourself: perfect for meditation, less so for a video conference.

What about security? Fair enough, no fuss about it (although sometimes Apple's security is so exaggerated that, in order not to let anyone into your device, it doesn't let you in either; I've written about that here too). The problem is that in Europe, between those who have to open up and those who don't want to, in the end the microphone stays closed as well, and at that point who cares about the new Liquid Glass effect on the icons.