14% of biomedical studies were written with AI

Science has long been going through a difficult period. More and more often, research published in peer-reviewed journals (that is, vetted by colleagues in the same field) proves to be unreliable: not "reproducible", and therefore unfit to feed the edifice of shared knowledge the scientific community is building. The causes are many: an overly competitive academic environment that rewards those who publish as much as possible, even at the expense of quality (the so-called "publish or perish"); the flourishing of predatory journals interested in making money without guaranteeing the reliability of the articles they host; and a number of active researchers, larger than ever across the globe, moving in a world regulated by customs born when the scientific community was a much smaller and more exclusive club.

The scientific "revolution" of Gemini and ChatGPT

These problems have been known for some time and still lack a solution. And new ones keep being added: the arrival of artificial intelligence (AI), now used indiscriminately even in the production of scientific articles, without anyone being able to know how much of what is published is the result of research and experiments, and how much is instead the handiwork of artificial intelligence.

That the problem was bound to arise was, in fact, fairly obvious. Generative models such as Gemini or ChatGPT are revolutionizing the production of textual and visual content. And in every field where they find space, from literature to journalism, from schools to illustration, there is the risk that they are used improperly: without their use being declared, or to produce entire novels, articles or exam answers, leaving human beings only the effort of taking credit for them.

The word analysis that unmasked the trick

While it was clear that scientists too had started using artificial intelligence to produce their articles, and that this would inevitably create problems, until today there were no precise estimates of how widespread the practice was. To solve the mystery, a group of researchers from the University of Tübingen, Germany, in an analysis recently published in the journal Science Advances, took a different approach from previous attempts: instead of looking for a way to recognize AI-generated texts, the authors simply checked how the lexicon, that is, the set of words used in abstracts published in the biomedical field, changed after the launch of ChatGPT in November 2022.

Having identified a list of 454 words that seem to have come into fashion only in the last two years, they used them as an indicator of AI involvement in the drafting of the texts. These are, they explain, terms connected not with the content of the research but purely with the form of the text, such as "unparalleled" or "invaluable", whose spread is therefore linked to the stylistic choices of large language models, and not to a growing interest in some topic or line of research within the scientific community.
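The core idea of the method, comparing how often such marker words appear in abstracts before and after ChatGPT's launch, can be sketched in a few lines of Python. This is a toy illustration only: the word list, corpora, and numbers below are invented for the example and are not the study's actual data (the real analysis used 454 words and 1.5 million abstracts).

```python
# Illustrative subset of "AI-flavored" marker words.
# The Tübingen study identified 454 such words; these four are examples only.
MARKER_WORDS = {"unparalleled", "invaluable", "delve", "showcasing"}

def marker_frequency(abstracts):
    """Fraction of abstracts containing at least one marker word."""
    hits = sum(
        1 for text in abstracts
        if MARKER_WORDS & set(text.lower().split())
    )
    return hits / len(abstracts)

# Toy corpora standing in for abstracts published before and after
# November 2022 (entirely made up for the example).
before = [
    "we measured protein levels in mice",
    "results were consistent with prior work",
]
after = [
    "our unparalleled approach offers invaluable insights",
    "we delve into showcasing novel biomarkers",
    "we measured protein levels in mice",
]

# A jump in marker-word frequency after the cutoff date is the signal
# the study used as evidence of AI involvement.
excess = marker_frequency(after) - marker_frequency(before)
print(f"excess marker-word frequency: {excess:.2f}")
```

In the real study the comparison is done at the scale of the whole biomedical literature, so even small per-word frequency shifts become statistically meaningful.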

The race to publish: 200,000 papers written with AI

With this method, they examined all the abstracts of biomedical research published last year to check how many show unequivocal signs of the use of ChatGPT and company. The result: of one and a half million papers analyzed, 200,000, about one seventh of the total, appear to have been written with artificial intelligence. The percentage, the authors of the study warn, grew over the period examined, a sign that the phenomenon is becoming more common month by month. And it could even be underestimated, because researchers are becoming increasingly skilled at hiding the textual clues left by artificial intelligence.

Obviously, the study cannot tell us what use was made of AI in each of the papers identified. And the problem is precisely this: artificial intelligences are increasingly used, and there is no way to know what for. There are certainly legitimate uses, such as helping non-native speakers translate their texts into English, or cleaning up typos and grammatical errors. But there are also improper and potentially harmful practices, such as having the AI write large portions of a text without the necessary supervision, at the risk of introducing errors and inaccuracies. If not outright inventing research from scratch just to add it to the pile, a practice that is unfortunately proving less rare than one might, or would like to, think.
