Pentagon and Anthropic clash over AI military usage

A high-stakes conflict has emerged between the U.S. Department of Defense and the AI company Anthropic. At the center of the dispute are the ethical boundaries Anthropic set for its AI model, Claude. The company explicitly prohibited its technology from being used for mass domestic surveillance or for powering fully autonomous weapons systems. The Pentagon, however, rejected these restrictions, insisting that military contractors must allow AI to be used for all lawful purposes without interference from private firms.

Tensions escalated when Secretary of Defense Pete Hegseth issued an ultimatum and ultimately designated Anthropic a "supply chain risk to national security." This rare move, usually reserved for foreign adversaries, prevents federal agencies from doing business with the company. Although Anthropic has already integrated its AI into various military operations, the administration has ordered a phase-out of the technology. In response, Anthropic has launched legal action, claiming the designation is illegal retaliation.

The clash underscores a significant shift in the relationship between Silicon Valley and the military. As the Pentagon pursues "AI dominance," it has made clear that it demands unilateral control over the tools used for defense, challenging the independence of private technology developers.
