
Nvidia Announces Flagship AI Chip H200 with Up to 90% Performance Improvement; Its Stock Rises as Much as 1.4% on the Same Day

go149

On November 13th local time, NVIDIA released its latest AI chip, the H200 Tensor Core GPU, equipped with 141 GB of memory and capable of processing massive amounts of data for generative AI and high-performance computing workloads. According to the official announcement, compared with the previous-generation H100 in Nvidia's H series, memory capacity has nearly doubled and performance has improved by up to 60% to 90%. The H100 is the chip that OpenAI, the world's most closely watched artificial intelligence startup, used to train its large language model GPT-4.
Broadly speaking, any processor that can run AI algorithms can be called an AI chip, including GPUs, FPGAs, ASICs, and so on. In a narrower sense, AI chips are designed specifically for AI algorithms, often sacrificing some generality in exchange for higher performance on those specialized workloads.
According to New Intelligence, the Nvidia H200 is the first GPU to use HBM3e, an extended version of high-bandwidth memory, with up to 141 GB of memory, nearly double the 80 GB of the previous-generation H100. The H200 and H100 are based on the same Hopper architecture, which accelerates LLMs and other deep learning models built on the Transformer architecture.
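The capacity jump matters because model weights alone can exceed a single GPU's memory. A back-of-the-envelope sketch (the 2-bytes-per-parameter FP16 assumption and the helper function are illustrative, not from the article) shows why a 70-billion-parameter model fits on one H200 but not one H100:

```python
def weights_gb(n_params_billion: float, bytes_per_param: int = 2) -> float:
    """Memory needed just for the weights, in GB (FP16 = 2 bytes per parameter)."""
    return n_params_billion * 1e9 * bytes_per_param / 1e9

llama2_70b = weights_gb(70)       # 140.0 GB of weights at FP16
h100_memory_gb = 80               # previous generation, per the article
h200_memory_gb = 141              # H200, per the article

print(llama2_70b)                 # 140.0
print(llama2_70b <= h100_memory_gb)  # False: needs several H100s or quantization
print(llama2_70b <= h200_memory_gb)  # True: the weights fit on a single H200
```

In practice inference also needs memory for the KV cache and activations, so real deployments budget well above the raw weight size.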
The H200's performance gains show up mainly in large-model inference: it processes the Llama 2 70B large language model at almost twice the speed of the H100. Its performance on high-performance computing (HPC) applications has also improved by nearly 20% over the previous generation.
Memory bandwidth is crucial for HPC applications, enabling faster data transfer and reducing bottlenecks in complex processing. For memory-intensive HPC applications such as simulation, scientific research, and artificial intelligence, the H200's higher memory bandwidth ensures efficient data access and manipulation, and time to results can be up to 110 times faster than with CPUs.
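A rough way to see why bandwidth dominates LLM inference: generating each token requires streaming roughly the full set of weights from memory once, so bandwidth divided by model size bounds tokens per second. A minimal sketch, assuming a 4.8 TB/s HBM3e bandwidth figure taken from Nvidia's published H200 specifications (it is not stated in this article):

```python
def max_tokens_per_sec(weights_bytes: float, bandwidth_bytes_per_sec: float) -> float:
    """Bandwidth-bound upper limit on decode throughput for one request."""
    return bandwidth_bytes_per_sec / weights_bytes

weights_bytes = 70e9 * 2     # Llama 2 70B at FP16, ~140 GB
h200_bandwidth = 4.8e12      # ~4.8 TB/s (assumed from Nvidia specs, not the article)

print(round(max_tokens_per_sec(weights_bytes, h200_bandwidth), 1))  # ~34.3 tokens/s
```

Real throughput is lower per request but higher in aggregate, since batching amortizes each weight read across many concurrent requests.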
In addition, according to NVIDIA's official website, H200 has also improved energy efficiency, reducing energy consumption by approximately 50% compared to H100.
At present, NVIDIA's global ecosystem of partner server makers includes ASRock Rack, ASUS, Dell Technologies, Eviden, GIGABYTE, Hewlett Packard Enterprise, Ingrasys, Lenovo, QCT, Supermicro, Wistron, and Wiwynn, among others, all of which can update their existing systems directly with the H200. Beyond CoreWeave, Lambda, and Vultr, in which NVIDIA itself has invested, cloud service providers such as Amazon Web Services, Google Cloud, Microsoft Azure, and Oracle Cloud will deploy the first batch of H200 chips starting next year.
On the day of the H200 release, NVIDIA's stock closed up 0.59%, after an intraday gain of as much as 1.4%. Nvidia's official website has not yet disclosed the H200's selling price. According to CNBC, the previous-generation H100 entered mass production earlier this year at a price of approximately $25,000 (about 182,000 RMB) per GPU.
According to New Intelligence, NVIDIA's GH200 superchip, which combines a Hopper GPU with a Grace CPU, will, with the support of the new-generation H200, deliver a total of approximately 200 exaflops of AI computing power to supercomputing centers worldwide. The Nvidia GH200 will be used to build supercomputers at major supercomputing centers around the world, including the Jülich Supercomputing Centre in Germany, the Joint Center for Advanced High Performance Computing in Japan, and the Texas Advanced Computing Center and the National Center for Supercomputing Applications in the United States.
According to foreign media reports, industry experts say AMD's upcoming MI300 accelerator is expected to be a competitive AI product. AMD will hold its "Advancing AI" keynote on December 6th, unveiling the Instinct accelerator series codenamed MI300.
The development and cross-border sale of AI chips also sit at the center of an international political vortex. In October of this year, the United States tightened its AI chip export controls on China. The US government now requires Nvidia to obtain a license before selling chips such as the A100 to Chinese customers, hindering Nvidia's and Intel's presence in the Chinese market.
At present, China's domestic AI chip makers mainly include Cambricon, Jingjia Micro, Intellifusion, Hengshuo, Hygon Information, Fudan Microelectronics, Anlogic, Montage Technology, Hangyu Micro, Guoxin Technology, Unigroup Guoxin, and Guoke Micro, among others. Of these, Cambricon's ASIC chips lead among domestic AI chips, while Jingjia Micro's GPUs lead among domestic GPUs.
In addition, other Chinese technology giants and startups have also staked out positions in the AI chip field.