Ten reasons not to trust AI


Can't do something yourself? The AI thinks for you

You write with AI, you think with AI, you search with AI, you translate with AI, you calculate with AI, you study with AI, you write your thesis with AI, you write a novel with AI (often the only one who reads it is your AI), you write songs with AI, you self-diagnose with AI, everything with AI. It has become the new badge of authority: it used to be "I read it on the internet" (ok, but where?) or "I found it on Google" (ok, but where?); now people say "ChatGPT said so," and feel safe.

It is also the new form of laziness: if the AI says it, it's true; if the AI writes it, it has value. Sam Altman, CEO of OpenAI, knows this, and never misses an opportunity to repeat that people trust it too much. Trusting too much means not understanding how it works; it means not distinguishing language from knowledge; it means confusing a machine that completes sentences with a machine that thinks (although the illusion that it "thinks" is very strong, since we too are linguistic machines). Here are ten concrete reasons why you should not trust AI.

First reason: Altman himself admits it, as I said. ChatGPT hallucinates, and if the people who make it admit it, that means it is not a marginal detail but the very substance of the product. What is a hallucination? When the model doesn't know something, it makes it up. What, didn't they solve the problem with the GPT-5 model?

Second reason: no, even the newest model, GPT-5, keeps making mistakes. They are no longer the coarse errors that GPT-3 or GPT-4o made; they are refined errors, better camouflaged, let's say, and all the more dangerous for it.

Third reason: OpenAI itself recommends using it as a second opinion, never as a primary source. In general, if you are an expert on a topic you will immediately see where it goes wrong; if you are not, you will take everything as received wisdom. A tip: when you use it for serious searches, medical ones for example, and you are not a doctor (and if you are a doctor you are not Dr. House, and if you are Dr. House you are not my friend Daniele), always activate the "Deep Research" mode; for deeper reasoning, the "Thinking" mode. On serious issues, at least, it will tell you to contact your doctor. Altman has been careful here in the system prompt: he doesn't want deaths on his conscience.

Fourth reason: users believe it too much. There are people who declare they can no longer make any decision without first asking the AI, so much so that we have started talking about a phenomenon of cognitive dependence. Which, among other things, is producing a creative flattening in those who use it to produce songs, narrative texts, visual images, videos, art. On the other hand, the difference from real artists is more and more visible: rely on the AI and you get nowhere, unless your goal is to go viral with a video on TikTok, although if you are aiming at virality you are already off to a bad start.

Fifth reason: hallucinations cannot be eliminated. They are not a bug that will be fixed in the next update; they are the very functioning of the model. The release of ChatGPT 5 itself (announced as an "atomic bomb", the most harmless atomic bomb in the world) has made clear how unfounded the idea of exponential LLM improvement is. Indeed, many models with wider context windows and greater processing capacity often work worse.

Sixth reason: beyond hallucinations, it does not distinguish true from false. A well-written lie and a correct piece of information carry the same statistical weight for the AI. For heaven's sake, using AI is still better than being the user who reads the opinions of random strangers on TikTok, but everything must always be taken with a grain of salt and filtered through critical sense, if you have any.
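To make the point concrete, here is a deliberately toy sketch (my own illustration, not how ChatGPT is actually built) of why statistics and truth are different things: a trivial next-word predictor trained on word counts will happily complete a sentence with a falsehood if the falsehood is simply more frequent in its training text.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which: pure frequency, no notion of truth."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts

def complete(counts, word):
    """Return the statistically most frequent continuation, nothing more."""
    return counts[word].most_common(1)[0][0]

# A false statement repeated twice outweighs a true one stated once.
corpus = [
    "the earth is flat",
    "the earth is flat",
    "the earth is round",
]
model = train_bigrams(corpus)
print(complete(model, "is"))  # → flat
```

A real LLM is vastly more sophisticated than this bigram counter, but the underlying objective is the same kind of thing: predicting plausible continuations from training data, not verifying them.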

Seventh reason: it mixes registers (and content), sometimes without any criterion. It can combine opposite concepts into a formally coherent but conceptually inconsistent discourse, and it can produce texts that seem solid but in fact don't mean a sh… a shred. Be careful.

Eighth reason: it always replies. It does not know silence, it cannot refrain from saying something, it generates words even when it has nothing to say, and this continuous flow gets mistaken for content. It never tells you "I don't know", just like talk-show commentators (well, a little better; the comparison is extreme, but that's the idea).

Ninth reason: it has no point of view; it bends to the user's. That is not neutrality, it is simple complacency. A machine that adapts to anything, from enthusiasm to delirium, is not reliable. Write the first thing that comes to mind and it replies that it's brilliant. It makes anyone feel like Einstein.

Tenth reason: AI can be a useful tool for those who have a brain and use it, useless if you use it instead of your brain, and deleterious if you have never used your brain at all.

Its use is promising in specialized, advanced (and very expensive) models, such as in diagnostics, in general science, and also in military defense (I don't think you need to attack anyone with missile systems, although I am sometimes tempted, and thank goodness I don't have them at my disposal).