Pentagon and Anthropic clash over AI military usage
A high-stakes conflict has emerged between the U.S. Department of Defense and the AI company Anthropic.
The disagreement centers on the ethical boundaries Anthropic set for its AI model, Claude. The Pentagon rejected these restrictions, insisting that military contractors must allow AI to be used for all lawful purposes without interference from private firms.
The situation escalated when Defense Secretary Pete Hegseth issued an ultimatum, eventually designating Anthropic a 'supply chain risk to national security.'
While Anthropic has already integrated its AI into various military operations, the administration has ordered a phase-out of the technology.
In response, Anthropic has launched legal action, arguing that the designation amounts to unlawful retaliation.
This clash underscores a significant shift in the relationship between Silicon Valley and the military.
As the Pentagon strives for 'AI dominance,' it is making clear that it demands unilateral control over the tools used in national defense, challenging the independence of private technology developers.
