ASUS Launches Advanced Liquid-Cooled AI Servers
In March 2026, ASUS officially launched a series of advanced liquid-cooled AI servers, reshaping the landscape of high-density computing.
Moving away from traditional air cooling, these new systems are designed to handle the extreme power density required for next-generation AI.
At the heart of this innovation is the ASUS AI POD, a powerhouse built on the NVIDIA Vera Rubin architecture, capable of supporting trillion-parameter LLM training with 100% liquid cooling.
By utilizing methods like Direct-to-Chip cooling, ASUS enables hardware to run at peak capacity without thermal throttling.
A standout achievement is the system's energy efficiency: these setups have reached a Power Usage Effectiveness (PUE) of 1.18, meaning every watt delivered to IT hardware costs just 1.18 W at the facility level.
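To make the PUE figure concrete, here is a minimal sketch of the standard calculation. The power values are hypothetical round numbers chosen for illustration, not ASUS measurements:

```python
# PUE (Power Usage Effectiveness) = total facility power / IT equipment power.
# A PUE of 1.0 would mean zero overhead; typical air-cooled data centers
# often sit well above 1.4.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Return the Power Usage Effectiveness for the given power draws (kW)."""
    return total_facility_kw / it_equipment_kw

# Hypothetical example: 1000 kW of IT load drawing 1180 kW at the meter.
# Cooling and power delivery add only 180 kW, i.e. 18% overhead on IT load.
value = pue(total_facility_kw=1180.0, it_equipment_kw=1000.0)
print(f"PUE = {value:.2f}")
```

At a PUE of 1.18, cooling and power-distribution overhead is 18% of the IT load, which is the basis for the cost and emissions savings claimed below.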
This is a critical milestone, as it proves that these 'AI Factories' can achieve massive compute density while drastically cutting operational costs and carbon emissions.
Through partnerships with industry leaders like Vertiv, ASUS is transforming high-performance computing from an experimental endeavor into a reliable, sustainable, and purpose-built infrastructure for the future of artificial intelligence.
