
AMD launches new AI high-performance computing solution


On October 11th, at Advancing AI 2024, AMD launched a new lineup of AI high-performance computing solutions, including the fifth-generation AMD EPYC server CPUs, the AMD Instinct MI325X accelerator, the AMD Pensando Salina DPU, the AMD Pensando Pollara 400 NIC, and the AMD Ryzen AI PRO 300 series processors for enterprise AI PCs.
Dr. Lisa Su, Chair and CEO of AMD, stated, "With our new EPYC CPUs, AMD Instinct GPUs, and Pensando DPUs, we will provide leading computing power to support our customers' most important and demanding workloads. Looking ahead, we expect the market for data center AI accelerators to grow to $500 billion by 2028."
In 2018, AMD EPYC server CPUs held only a 2% market share; in less than seven years that figure has grown to 34%. Data centers and AI have created enormous growth opportunities for AMD.
EPYC CPU fully upgraded
As one of AMD's core products, AMD EPYC server CPUs have undergone a comprehensive upgrade.
The fifth-generation AMD EPYC server CPUs (the AMD EPYC 9005 series), codenamed "Turin", are built on the "Zen 5" core architecture, are compatible with the SP5 platform, offer up to 192 cores, and reach a maximum boost frequency of 5GHz. They support the AVX-512 instruction set with a full 512-bit wide data path.
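To illustrate what a full 512-bit data path means in practice, below is a minimal sketch using standard AVX-512 intrinsics (the kernel and the name saxpy_avx512 are our illustrative choices, not AMD code). Each _mm512_fmadd_ps call processes sixteen single-precision floats at once; on a core with a native 512-bit data path this executes as one full-width operation rather than two 256-bit halves.

```c
// Minimal AVX-512 sketch: y[i] += a * x[i], 16 floats per iteration.
// Build with: cc -O2 -mavx512f saxpy.c
#include <immintrin.h>
#include <stddef.h>

void saxpy_avx512(float a, const float *x, float *y, size_t n) {
    __m512 va = _mm512_set1_ps(a);              // broadcast scalar a to all 16 lanes
    size_t i = 0;
    for (; i + 16 <= n; i += 16) {
        __m512 vx = _mm512_loadu_ps(x + i);     // load 16 floats (512 bits)
        __m512 vy = _mm512_loadu_ps(y + i);
        vy = _mm512_fmadd_ps(va, vx, vy);       // fused multiply-add: va*vx + vy
        _mm512_storeu_ps(y + i, vy);            // store 16 results
    }
    for (; i < n; ++i)                          // scalar tail for leftover elements
        y[i] += a * x[i];
}
```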
AMD describes the EPYC 9005 as an advanced CPU designed for AI. Compared with legacy hardware, it can deliver equivalent integer compute performance with significantly fewer racks, greatly reducing physical space, power consumption, and the number of software licenses required, thereby freeing capacity for new or growing AI workloads.
AMD also says the EPYC 9005 delivers outstanding AI inference performance: compared with the previous generation, a server running two fifth-generation AMD EPYC 9965 CPUs can provide up to twice the inference throughput.
"AMD has proven that it can meet the needs of the data center market and set the benchmark for data center performance, efficiency, solutions, and capabilities for cloud, enterprise, and AI workloads," said Dan McNamara, Senior Vice President and General Manager of AMD's Server Division.
Instinct GPU steadily advances
As a key carrier of AI computing power, the AMD Instinct GPU line has likewise been updated, and AMD announced its GPU product roadmap for 2025 and 2026.
The AMD Instinct MI325X is built on the third-generation AMD CDNA architecture, featuring 256GB of HBM3E memory and 6TB/s of memory bandwidth, delivering strong training and inference performance and efficiency. According to data released by AMD, the MI325X outperforms the Nvidia H200 in inference across multiple models.
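As a rough illustration of what the bandwidth figure implies for inference (a back-of-envelope calculation of ours, not an AMD number): streaming the accelerator's entire 256GB of HBM3E once at the peak 6TB/s takes

\[ t = \frac{256\ \text{GB}}{6\ \text{TB/s}} \approx 43\ \text{ms}, \]

so a memory-bound workload that must read every byte of HBM per step, such as generating one token from a model whose weights fill the memory, is limited to roughly 23 such passes per second. This is why memory capacity and bandwidth are headline specifications for inference accelerators.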
AMD stated that the Instinct MI325X is on track for production shipments in the fourth quarter of 2024, with complete systems and infrastructure solutions from partners such as Dell, Gigabyte, HP, and Lenovo launching from the first quarter of 2025.
Looking at the future product roadmap, AMD says the Instinct MI350, based on the AMD CDNA 4 architecture, will deliver up to a 35x improvement in inference performance over CDNA 3-based accelerators. The MI350 will support up to 288GB of HBM3E memory and is expected to launch in the second half of 2025.
AMD also announced significant progress in the development of the AMD Instinct MI400 based on the AMD CDNA Next architecture, with plans to launch it in 2026.
Improving AI network performance
Currently, AI networks are crucial for ensuring effective utilization of CPUs and accelerators in AI infrastructure.
To support next-generation AI networks, AMD is leveraging its widely deployed programmable DPUs to power computing at hyperscale. An AI network can be divided into two parts: the front end, which delivers data and information to the AI cluster, and the back end, which manages data transfer between accelerators and across the cluster. Accordingly, AMD has launched the AMD Pensando Salina DPU for the front end and the AMD Pensando Pollara 400 for the back end.
The AMD Pensando Salina DPU is the third generation of one of the world's highest-performing and most programmable DPUs, offering twice the performance, bandwidth, and scale of its predecessor. Supporting 400G throughput for fast data transfer, it is a key component of AI front-end networks, optimizing performance, efficiency, security, and scalability for data-driven AI applications.
The Pensando Pollara 400 is the industry's first UEC-ready AI NIC (an AI network card compliant with the Ultra Ethernet Consortium specifications). It supports next-generation RDMA software and an open networking ecosystem, ensuring leading performance and scalable, efficient communication between accelerators in the back-end network.
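Because the back-end network's job is accelerator-to-accelerator traffic, RDMA is the relevant software layer here. Below is a minimal sketch of device discovery using the standard Linux verbs API (libibverbs, from the rdma-core stack); this is generic verbs code of ours, not Pollara- or UEC-specific, shown only to illustrate the layer that RDMA-capable NICs plug into.

```c
/* Minimal RDMA sketch: enumerate verbs devices and print basic limits.
 * Build with: cc rdma_list.c -libverbs */
#include <infiniband/verbs.h>
#include <stdio.h>

int main(void) {
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);  // all RDMA-capable devices
    if (!devs) { perror("ibv_get_device_list"); return 1; }

    for (int i = 0; i < num; ++i) {
        struct ibv_context *ctx = ibv_open_device(devs[i]); // open a device context
        if (!ctx) continue;

        struct ibv_device_attr attr;
        if (!ibv_query_device(ctx, &attr))                  // returns 0 on success
            printf("%s: max_qp=%d max_cq=%d\n",
                   ibv_get_device_name(devs[i]),
                   attr.max_qp, attr.max_cq);
        ibv_close_device(ctx);
    }
    ibv_free_device_list(devs);
    return 0;
}
```

RDMA-capable NICs are typically exposed to software through interfaces like this, which is why support for next-generation RDMA software matters for compatibility with existing network stacks.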
In terms of availability, both the AMD Pensando Salina DPU and the AMD Pensando Pollara 400 will sample to customers in the fourth quarter of 2024 and are expected to launch in the first half of 2025.
Forrest Norrod, Executive Vice President and General Manager of AMD's Data Center Solutions Division, said, "With the new AMD Instinct accelerators, EPYC processors, AMD Pensando network engines, an open software ecosystem, and the ability to integrate these into AI infrastructure, AMD fully possesses the key expertise to build and deploy world-class AI solutions."