Pentagon and Anthropic clash over AI military usage
A high-stakes conflict has emerged between the U.S. Department of Defense and the AI company Anthropic.
The core of this disagreement lies in the ethical boundaries Anthropic set for its AI model, Claude.
The company explicitly prohibits the use of its technology for mass domestic surveillance or to power fully autonomous weapons systems.
However, the Pentagon rejected these restrictions, insisting that military contractors must allow AI to be used for all lawful purposes without interference from private firms.
The situation escalated when Defense Secretary Pete Hegseth issued an ultimatum, eventually designating Anthropic a 'supply chain risk to national security.'
This rare move, typically reserved for foreign adversaries, bars federal agencies from doing business with the company.
While Anthropic has already integrated its AI into various military operations, the administration has ordered a phase-out of the technology.
In response, Anthropic has launched legal action, claiming the designation is illegal retaliation.
This clash underscores a significant shift in the relationship between Silicon Valley and the military.
As the Pentagon strives for 'AI dominance,' it is making clear that it demands unilateral control over the tools used in national defense, challenging the independence of private technology developers.
