Nvidia Unveils Flagship AI Chip H200 with Up to 90% Performance Gain; Shares Rise as Much as 1.4% on the Day
go149
Posted on 2023-11-14 16:02:40
On November 13 local time, Nvidia released its latest AI chip, the H200 Tensor Core GPU, equipped with 141 GB of memory and built to handle massive amounts of data for generative AI and high-performance computing workloads. According to the company, memory capacity has nearly doubled compared with the previous-generation H100 in Nvidia's H series, and performance has improved by roughly 60% to 90%. The H100 is the chip that OpenAI, the world's most closely watched AI startup, used to train its large language model GPT-4.
Broadly speaking, any processor that can run AI algorithms can be called an AI chip, including GPUs, FPGAs, and ASICs. In the narrow sense, an AI chip is designed specifically for AI algorithms, typically trading some general-purpose capability for better performance on those specialized workloads.
According to New Intelligence, the Nvidia H200 is the first GPU to use HBM3e, the extended version of high-bandwidth memory, with up to 141 GB of memory, nearly double the 80 GB of the previous-generation H100. The H200 and H100 are built on the same Hopper architecture, which accelerates LLMs and other Transformer-based deep learning models.
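To put the capacity figure in perspective, a rough back-of-the-envelope sketch (not from the article) of how many FP16 model parameters fit in each GPU's memory, assuming 2 bytes per parameter and ignoring activations, KV cache, and framework overhead:

```python
# Illustrative estimate only: how many FP16 parameters fit in a given amount
# of GPU memory. Real-world headroom is smaller because activations, KV cache,
# and runtime overhead also consume memory. The 80 GB and 141 GB figures come
# from the article; everything else is an assumption.

BYTES_PER_FP16_PARAM = 2  # FP16/BF16 weights use 2 bytes per parameter

def max_params_billions(memory_gb: float) -> float:
    """Rough number of FP16 parameters (in billions) that fit in memory_gb."""
    memory_bytes = memory_gb * 1e9
    return memory_bytes / BYTES_PER_FP16_PARAM / 1e9

for name, mem_gb in [("H100 (80 GB)", 80), ("H200 (141 GB)", 141)]:
    print(f"{name}: ~{max_params_billions(mem_gb):.1f}B FP16 parameters")

# H100 (80 GB):  ~40.0B parameters
# H200 (141 GB): ~70.5B parameters -- roughly enough for a 70B model's weights
```

Under these simplifying assumptions, the jump from 80 GB to 141 GB is what makes a 70B-class model's weights fit on a single GPU, which connects to the Llama 2 70B results discussed next.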
The H200's performance gains show up mainly in large-model inference: it processes the Llama 2 70B large language model at nearly twice the speed of the H100. Its performance on high-performance computing (HPC) applications has also improved by nearly 20% over the previous generation.
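A minimal sketch of why memory bandwidth drives LLM inference speed: at small batch sizes, generating each token requires streaming essentially all model weights through the GPU once, so ideal per-token latency is roughly weight bytes divided by memory bandwidth. The bandwidth figures below are assumptions taken from Nvidia's public spec sheets, not from this article, and the model is deliberately simplistic.

```python
# Back-of-the-envelope, bandwidth-bound estimate of per-token decode latency
# for a 70B-parameter FP16 model at batch size 1. Assumed bandwidths:
# H100 SXM ~3.35 TB/s, H200 ~4.8 TB/s (public spec-sheet values, treated as
# assumptions here).

PARAMS = 70e9            # Llama 2 70B parameter count
BYTES_PER_PARAM = 2      # FP16 weights
weight_bytes = PARAMS * BYTES_PER_PARAM

assumed_bandwidth = {
    "H100 (assumed ~3.35 TB/s)": 3.35e12,
    "H200 (assumed ~4.8 TB/s)": 4.8e12,
}

for name, bw in assumed_bandwidth.items():
    ms_per_token = weight_bytes / bw * 1e3
    print(f"{name}: ~{ms_per_token:.0f} ms/token (ideal, bandwidth-bound)")

# The ~1.4x bandwidth ratio explains part of the reported ~2x speedup on
# Llama 2 70B; the rest comes from factors this sketch does not model
# (larger capacity allowing bigger batches, software optimizations, etc.).
```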
Memory bandwidth is crucial for HPC applications because it enables faster data transfer and reduces bottlenecks in complex processing. For memory-intensive HPC applications such as simulation, scientific research, and artificial intelligence, the H200's higher memory bandwidth ensures efficient data access and operation, with time to results up to 110 times faster than on CPUs.
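The standard way to reason about this is the roofline model: a kernel's attainable throughput is bounded by the smaller of the compute peak and memory bandwidth times arithmetic intensity (FLOPs per byte moved). A hedged sketch, with illustrative peak numbers that are assumptions rather than figures from the article:

```python
# Roofline-style bound: attainable TFLOP/s = min(peak compute,
# bandwidth * arithmetic intensity). Memory-intensive HPC kernels have low
# FLOPs-per-byte, so they sit on the bandwidth-limited slope, where extra HBM
# bandwidth translates almost directly into speedup. Peak and bandwidth
# values below are illustrative assumptions only.

def attainable_tflops(peak_tflops: float, bandwidth_tbs: float,
                      flops_per_byte: float) -> float:
    """Classic roofline bound: min(compute roof, bandwidth * intensity)."""
    return min(peak_tflops, bandwidth_tbs * flops_per_byte)

PEAK_FP64_TFLOPS = 34.0  # assumed FP64 peak for a Hopper-class GPU

for bw_label, bw_tbs in [("~3.35 TB/s", 3.35), ("~4.8 TB/s", 4.8)]:
    for intensity in (0.5, 2.0, 16.0):   # FLOPs per byte moved from HBM
        t = attainable_tflops(PEAK_FP64_TFLOPS, bw_tbs, intensity)
        print(f"bandwidth {bw_label}, intensity {intensity:>4}: "
              f"~{t:.1f} TFLOP/s attainable")

# At low intensity (e.g. stencil or sparse kernels) the bound scales with
# bandwidth; only at high intensity does the compute roof take over.
```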
In addition, according to Nvidia's official website, the H200 also improves energy efficiency, reducing energy consumption by approximately 50% compared with the H100.
At present, Nvidia's global ecosystem of partner server makers, which includes ASRock Rack, ASUS, Dell Technologies, Eviden, Gigabyte, Hewlett Packard Enterprise, Ingrasys, Lenovo, QCT, Supermicro, Wistron, and Wiwynn, can update their existing systems directly with the H200. In addition to CoreWeave, Lambda, and Vultr, in which Nvidia has invested, cloud service providers such as Amazon Web Services, Google Cloud, Microsoft Azure, and Oracle Cloud will begin deploying the first batch of H200 chips next year.
On the day of the H200 announcement, Nvidia's stock closed up 0.59%, after rising as much as 1.4% intraday. Nvidia's official website has not yet disclosed the H200's selling price. According to CNBC, the previous-generation H100 began mass production earlier this year at a price of roughly $25,000 (about 182,000 RMB) per GPU.
Also according to New Intelligence, Nvidia's GH200 superchip, which combines a Hopper GPU with a Grace CPU, will deliver a total of roughly 200 exaflops of AI computing power to supercomputing centers around the world with the support of the new-generation H200. The GH200 will be used to build supercomputers at major supercomputing centers worldwide, including the Jülich Supercomputing Centre in Germany, the Joint Center for Advanced High Performance Computing in Japan, and the Texas Advanced Computing Center and the National Center for Supercomputing Applications in the United States.
According to foreign media reports, industry experts say AMD's upcoming MI300 accelerator is expected to become a competitive AI product. AMD will hold its "Advancing AI" keynote on December 6, unveiling the Instinct accelerator series codenamed MI300.
The development and cross-border sale of AI chips also sit at the center of an international political vortex. In October of this year, the United States tightened its export controls on AI chips to China, requiring Nvidia to obtain a license before selling chips such as the A100 to Chinese customers, a move that hampers Nvidia's and Intel's presence in the Chinese market.
At present, China's AI chip-related manufacturers mainly include Cambricon, Jingjia Micro, Yuntian Lifei, Hengshuo, Haiguang Information, Fudan Microelectronics, Anlu Technology, Lanqi Technology, Hangyu Micro, Guoxin Technology, Ziguang Guowei, and Guoke Micro. Among them, Cambricon's ASICs lead among domestic AI chips, while Jingjia Micro's GPUs lead among domestic GPU chips.
In addition, other Chinese technology giants and startups have also built up positions in the AI chip field.
CandyLake.com is an information publishing platform and only provides information storage services.
Statement: The views in this article are solely the author's own; they do not represent the position of CandyLake.com and do not constitute advice. Please treat them with caution.