
When the missionary Jensen Huang met the tech enthusiast Lisa Su, a technology war over AI chips had already begun


In early June, the leaders of the world's two largest AI arms dealers once again faced off directly. Both of Chinese descent and fighting on the same AI chip battlefield, the rivalry between Nvidia CEO Jensen Huang and AMD CEO Lisa Su has long been a talking point. This time, each delivered an AI-themed keynote, a product roadmap, and a fresh vision of the future.
As chipmakers, Nvidia and AMD are both beneficiaries of the AI frenzy. Nvidia holds more than 80% of the AI chip market, and its stock has risen over 130% since the start of this year alone, pushing its market value to $2.8 trillion, approaching Apple's and the second highest in the world. AMD has gone head-to-head with Nvidia, winning orders from customers such as Microsoft and Google, and its market value topped $300 billion in March.
If Jensen Huang's keynote made him sound like an AI missionary, preaching the rise of generative AI and the intelligent robots to come, Lisa Su came across as a genuine tech enthusiast, devoting more of her time to the performance of AMD's new products. Despite their differences in style and content, both leaders, each wielding enormous AI computing power, signaled the future direction of the AI revolution in their speeches.
Accelerate everything
As the global AI race heats up, AI chips are in short supply, and the two chip giants Nvidia and AMD have quickened their product cadence. Chip architectures are now refreshed every year instead of every two, a pace that threatens to outrun Moore's Law.
Jensen Huang revealed in his June 2 keynote that Nvidia will launch the Blackwell Ultra AI chip in 2025, the next-generation Rubin architecture in 2026, and Rubin Ultra in 2027.
The next day, Lisa Su followed with AMD's AI chip roadmap: the new MI325X will ship in the fourth quarter of this year, the MI350 series will arrive in 2025 with inference performance expected to be 35 times that of the existing MI300 series, and the MI400 series will follow in 2026.
"This annual rhythm exists because the market needs updated products and features," Lisa Su said at a press conference after the keynote. "It doesn't mean every customer will use every product, but we will launch the next major product every year so that we always have the most competitive portfolio."
In a two-hour keynote, Jensen Huang explained why AI accelerators such as graphics processing units (GPUs) are needed for training and inference on AI models. Computing inflation, he argued, poses a serious challenge: data is growing exponentially, power consumption in global data centers is surging, and computing costs keep climbing, while CPU performance scaling is slowing or even stagnating. To speed up these workloads, Nvidia pioneered an architecture that pairs GPUs with CPUs, with the GPU handling the parallel processing.
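The parallel-processing idea behind GPU acceleration can be illustrated in miniature with vectorized array math; this is a hedged sketch using NumPy on a CPU, not Nvidia's actual architecture, but a GPU applies the same data-parallel principle across thousands of cores:

```python
import time
import numpy as np

# Illustrative only: the same computation done one element at a time
# versus as one data-parallel (vectorized) operation. GPUs push this
# idea much further by running element-wise work across thousands of cores.
x = np.random.rand(1_000_000)

t0 = time.perf_counter()
serial = sum(v * v for v in x)   # one element at a time, serial style
t_serial = time.perf_counter() - t0

t0 = time.perf_counter()
vectorized = np.dot(x, x)        # whole array at once, data-parallel
t_vec = time.perf_counter() - t0

# Both paths compute the same sum of squares.
assert abs(serial - vectorized) < 1e-6 * serial
print(f"serial: {t_serial:.4f}s, vectorized: {t_vec:.4f}s")
```

On a typical machine the vectorized path is orders of magnitude faster, which is the same gap, writ small, that dedicated accelerators exploit at data-center scale.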
Jensen Huang also offered his signature "CEO math," summed up as "the more you buy, the more you save." By using dedicated processors, he said, a large amount of previously untapped performance can be reclaimed, saving money and energy: a task that once took 100 units of time may now finish in just 1, a 100x speedup, while power draw rises only 3x and cost only 1.5x.
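The "CEO math" is simple arithmetic; a sketch using only the figures quoted in the keynote (illustrative ratios, not actual measurements, and assuming cost is amortized per task):

```python
# Figures quoted in the keynote, used purely for illustration.
speedup = 100      # task completes 100x faster
power_ratio = 3.0  # accelerated system draws 3x the power
cost_ratio = 1.5   # and costs 1.5x as much

# Energy = power x time, so per-task energy and (amortized) per-task
# cost both shrink when the speedup outpaces the rise in power and price.
energy_per_task = power_ratio / speedup   # 3/100  = 0.03x  (~33x less energy)
cost_per_task = cost_ratio / speedup      # 1.5/100 = 0.015x (~67x cheaper)

print(f"energy per task: {energy_per_task}x, cost per task: {cost_per_task}x")
```

In other words, even though the accelerated system is more expensive and more power-hungry in absolute terms, each unit of work gets dramatically cheaper, which is the sense in which buying more saves more.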
Jensen Huang emphasized that the sharp drop in computing costs will let the market and developers keep exploring new applications, in turn accelerating application development. This creates a virtuous cycle; as Lisa Su noted at her press conference, some new acceleration technologies are largely driven by applications.
Driven by massive demand for data generation and by technological advances, Nvidia believes the era of data centers with millions of GPUs is just around the corner. That places new demands on AI computing infrastructure, including the interconnects between chips and across data centers.
Jensen Huang said Nvidia's AI-dedicated Ethernet architecture, Spectrum-X, will keep pace with its AI chips to meet the communication needs of ever larger GPU clusters. The Spectrum-X network platform, launched at COMPUTEX 2023, delivers 1.6 times the network performance of traditional Ethernet architectures, accelerating AI workloads.
Nvidia currently uses its fifth-generation NVLink technology to connect CPUs with GPUs and to interconnect multiple GPUs. Although Nvidia has taken the lead and all but defined the industry standard for AI chips, its outsized advantage has also put the rest of the industry on guard.
To counter this technology and chip away at Nvidia's share of the AI chip market, AMD, Intel, Google, Microsoft, and other technology giants announced a new industry consortium, UALink, on May 30. The group aims to establish technical standards for AI accelerator interconnects and make the technology available to member companies.
In her keynote, Lisa Su likewise argued that UALink, built on Ethernet technology, is the best solution for scaling out accelerator chips.
Seizing the AI PC chip market
2024 has been dubbed the year of the AI PC. The war between Nvidia and AMD, the two major AI chip makers, has spread to the PC, and "AI PC" became a keyword shared by both leaders' keynotes.
In Jensen Huang's view, the PC will become a crucial AI platform. It will assist users in the background, run AI-enhanced applications, and even host more human-like virtual intelligent agents.
During her keynote, Lisa Su invited executives from partners including Microsoft, HP, Asus, and Lenovo on stage to introduce several upcoming AI PCs, all powered by AMD's brand-new processors.
But the two chip companies have made different choices for AI PC processors. Nvidia relies on its powerful GPUs to supply AI PCs with computing power; 100 million AI PCs equipped with GeForce RTX GPUs are already in use worldwide, machines Nvidia calls the "best AI PCs." AMD, Intel, Qualcomm, and other manufacturers instead embed NPUs designed specifically for AI into their chips, combining them with CPUs and GPUs to boost AI processing.
At the show, AMD not only previewed the fifth-generation EPYC chip, billed as the world's most powerful data center CPU, but also released the Ryzen AI 300 series for laptops, integrating NPU, GPU, and CPU, along with the Ryzen 9000 series processors for desktops.
"You need to match each workload with the right engine," Lisa Su explained at a press conference. "CPUs are great for more traditional workloads, GPUs are suited to graphics and gaming, and NPUs really help us accelerate AI specifically. It's natural for us to add all these new engines."
Unlike Nvidia, which started with gaming graphics cards and has long focused on GPUs, AMD offers computing solutions spanning different chip types. Long mocked as the perpetual runner-up, AMD initially focused on the CPU track but lost out to its rival Intel. After acquiring the graphics processor company ATI, it ran into Nvidia in the GPU market, where it remains in Nvidia's shadow to this day. AMD was once on the brink of bankruptcy, but under Lisa Su's leadership it returned to profitability, and its market value now far exceeds Intel's.
Nvidia and AMD are not the only chipmakers eyeing the AI PC market. Cristiano Amon, president and CEO of Qualcomm, took the stage at COMPUTEX 2024 to lay out Qualcomm's vision for leading in AI PCs. At Microsoft's press event in May, Qualcomm became the focus of the PC industry: Microsoft's first batch of AI PCs, the Copilot+ PCs, are all equipped with Qualcomm Snapdragon X series processors. Intel, for its part, launched an "AI PC Acceleration Program" last year and in March teamed up with Microsoft to define the AI PC standard.
Perhaps the current "one superpower, many strong players" pattern in AI chips will persist for several years, but the most enduring narrative in the technology industry is latecomers toppling giants, and Nvidia's high walls are not impregnable.