ChatGPT for military use? OpenAI changes its agreement with the Pentagon after the boom in uninstallations

It’s hard to believe that the boom in ChatGPT uninstallations hasn’t played its part in pushing OpenAI CEO Sam Altman to review the agreement signed with the US Department of Defense for the military use of its artificial intelligence.

The decision to close the deal “appeared opportunistic and haphazard,” Altman admitted, suggesting that public pressure worked. The CEO explains that he has worked with the War Department to introduce some “additions to the agreement, in order to make our principles absolutely clear.”

A few days after the announcement of the agreement between OpenAI and the Department of Defense on the use of AI in classified military networks, the company led by Altman was forced to review and clarify its terms. It is a partial backtrack, which came after the head-on clash between the Pentagon and Anthropic, and after an unprecedented boycott campaign that hit ChatGPT.

What happened: the Anthropic-Pentagon case and OpenAI’s opportunism

OpenAI ended up at the center of controversy after taking the place of rival Anthropic, which was in the meantime clashing harshly with the War Department and with US President Donald Trump himself.

After weeks of tension, the case exploded on February 27, when Donald Trump’s administration ordered federal agencies to stop using Anthropic’s technologies. The technology startup led by Dario Amodei had refused to accept conditions that, in its opinion, could have allowed mass surveillance and the use of AI in completely autonomous weapons systems.

The Pentagon went so far as to call Anthropic “a risk for the supply chain,” opening a bitter clash into which OpenAI quickly stepped, announcing an agreement to bring its models inside the classified network of the US Department of Defense.

The move was announced abruptly, and it triggered a wave of criticism.

The #quitGpt campaign and mass uninstallations

The reaction from users was not long in coming. The #quitGpt campaign took shape on social media, urging people to uninstall ChatGPT and migrate to Claude, perceived as more rigorous on an ethical level. Within a few days there was talk of over a million participants, guides to deleting accounts, and a surge in downloads for rivals (Claude in the lead).

It was a strong reputational blow, touching not only the military use of AI but also the feeling that OpenAI was abandoning its original narrative of technology “serving the common good” (“benefit for all humanity” is the formula used in the company’s founding documents) without fully explaining the limits of the agreement.

No surveillance and red lines: the changes

Under this pressure, in the last few hours Altman published an internal OpenAI note on X, announcing changes and clarifications to the agreement with the Pentagon. The post essentially draws some red lines: no use of AI for the domestic surveillance of US citizens or residents, and no tracking of them through personal data, including commercially acquired data. It also clarifies that OpenAI services will not be used by military intelligence agencies, such as the NSA (National Security Agency), unless a future contractual change is made.

Altman underlines that OpenAI intends to collaborate with the US government only “through democratic processes” and goes so far as to state that, faced with an order he deemed unconstitutional, he would prefer “to go to prison rather than carry it out.”

The omissions and the unresolved issue

However, there is an absence that weighs heavily: in the announced changes there is no explicit reference to the ban on the development or use of completely autonomous weapons, the point that had brought Anthropic into a head-on collision with the Pentagon. A gap that fuels the suspicions of those who see the revision of the agreement more as a reputational containment operation than a real change of direction.

Altman, meanwhile, also distances himself from the hard line against Anthropic, saying he has asked the Department of Defense to offer rivals the same conditions granted to OpenAI and not to designate them as a systemic risk.

“There are many things the technology simply isn’t ready for, and many areas where we don’t yet understand the trade-offs needed for security,” the CEO writes in his post. Indeed, the OpenAI-Pentagon case is emblematic of the tension between national security, the AI market, and public consensus, and it makes clear how military contracts can profoundly erode trust when they are not accompanied by transparency and well-defined limits.