
Nvidia drops "heavyweight" announcements late at night; Blackwell Ultra AI chip expected in 2025 | Big Model World


On the evening of June 2, NVIDIA founder and CEO Huang Renxun took the stage to deliver a keynote and revealed several key pieces of information. According to him, developers who use NVIDIA NIM to deploy AI models on clouds, in data centers, or on workstations can cut model deployment time from weeks to minutes. Customers including Pegatron, Lowe's, and Siemens are already using it.
In addition, Blackwell, the next-generation AI chip and supercomputing platform on which Nvidia has pinned high hopes, has entered production, with a Blackwell Ultra AI chip expected to launch in 2025.
NVIDIA NIM can shorten model deployment time from weeks to minutes
On the evening of June 2, Huang Renxun, NVIDIA's leather-jacket-clad founder, once again showed off his own products on stage, introducing NVIDIA NIM, an inference microservice that delivers models as optimized containers and aims to help enterprises of all sizes deploy AI services.
Strictly speaking, however, NVIDIA NIM is not a new product; it first appeared in March of this year. Nvidia announced on the evening of June 2 that the world's 28 million developers can download NVIDIA NIM, deploy AI models on clouds, in data centers, or on workstations, and build generative AI applications such as Copilot (an AI assistant) and ChatGPT-style chatbots. Starting next month, members of the NVIDIA Developer Program can use NIM for free for research, development, and testing on the infrastructure of their choice.
According to Nvidia, new generative AI applications are becoming increasingly complex, often requiring multiple models with different capabilities to generate text, images, video, speech, and so on. NVIDIA NIM provides a simple, standardized way to add generative AI to applications, cutting model deployment time from weeks to minutes.
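To make the container workflow described above concrete, the sketch below shows what querying a locally deployed NIM microservice might look like. This is a minimal sketch, not NVIDIA's documented procedure: the container image, port, and model identifier are assumptions for illustration; NIM microservices expose an OpenAI-compatible HTTP API.

```python
# Minimal sketch of querying a NIM microservice running on a local workstation.
# Assumptions (illustrative, not from the article): the container was started
# with something like
#   docker run --gpus all -p 8000:8000 nvcr.io/nim/meta/llama3-8b-instruct
# and serves an OpenAI-compatible API on port 8000.
import requests

NIM_URL = "http://localhost:8000/v1/chat/completions"  # assumed local endpoint

payload = {
    "model": "meta/llama3-8b-instruct",  # assumed model identifier
    "messages": [
        {"role": "user", "content": "Summarize what a NIM microservice does."}
    ],
    "max_tokens": 128,
}

resp = requests.post(NIM_URL, json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```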
Huang Renxun also revealed that nearly 200 technology partners, including Cadence, Cloudera, Cohesity, DataStax, NetApp, Scale AI, and Synopsys, are integrating NIM into their platforms to accelerate the deployment of generative AI. "Every enterprise hopes to integrate generative AI into its operations, but not every enterprise has a dedicated AI research team. NVIDIA NIM can be integrated into any platform, is accessible to developers anywhere, and can run in any environment," Huang Renxun said.
A Daily Economic News reporter learned that NIM comes pre-built, and nearly 40 models are currently available as endpoints for developers to try. Developers can access the NVIDIA NIM microservice for the Meta Llama 3 model from the open-source community platform Hugging Face, and use Hugging Face inference endpoints to access and run the Llama 3 NIM.
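As a hedged illustration of the developer experience described above, the sketch below calls a hosted Llama 3 NIM endpoint through the standard OpenAI Python client, which NIM's OpenAI-compatible API permits. The base URL, model name, and environment variable are assumptions for illustration, since the article does not specify them.

```python
# Sketch: calling a hosted Llama 3 NIM endpoint with the OpenAI Python client.
# The base_url, model name, and NVIDIA_API_KEY variable below are assumptions.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed hosted NIM endpoint
    api_key=os.environ["NVIDIA_API_KEY"],            # assumed credential variable
)

completion = client.chat.completions.create(
    model="meta/llama3-70b-instruct",  # assumed model identifier
    messages=[
        {"role": "user", "content": "What is retrieval-augmented generation?"}
    ],
    max_tokens=256,
)
print(completion.choices[0].message.content)
```

Because the protocol is OpenAI-compatible, swapping between a hosted endpoint and a self-deployed container should, under these assumptions, only require changing the base URL.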
It is worth noting that Nvidia also disclosed how a group of major customers are using NIM. Electronics manufacturer Foxconn is using NIM to develop domain-specific large language models (LLMs) for areas such as smart manufacturing, smart cities, and smart electric vehicles; Pegatron is using NIM for a local mixture-of-experts (MoE) model; Lowe's is using NVIDIA NIM inference microservices to improve the experience of employees and customers; Siemens is integrating its operational technology with NIM microservices for shop-floor AI workloads; and dozens of healthcare companies are deploying NIM to support generative AI inference in applications including surgical planning, digital assistants, drug discovery, and clinical trial optimization.
Blackwell chips begin production
In addition to the products above, Huang Renxun revealed in his speech that Nvidia's Blackwell chips have entered production and that a Blackwell Ultra AI chip will launch in 2025.
In May of this year, Huang Renxun said on an earnings call that Blackwell-architecture chips are expected to bring the company substantial revenue this year. Nvidia's high hopes for Blackwell are tied to strong market demand. According to its latest financial report, Nvidia posted revenue of $26 billion for the first quarter of fiscal 2025, up 262% year over year. Of that, data center revenue was $22.6 billion, up 427% year over year, making the segment the company's revenue leader.
According to Nvidia Chief Financial Officer Colette Kress, the data center growth was driven by higher shipments of Hopper-architecture GPUs such as the H100. One highlight of the quarter was Meta's launch of the Llama 3 open-source large model, which used nearly 24,000 H100 GPUs.
In addition to disclosing the chip's mass-production progress, Nvidia also launched a series of systems built on the NVIDIA Blackwell architecture.
These systems are reportedly equipped with Grace CPUs and NVIDIA networking and infrastructure to help enterprises build AI factories and data centers. Among them, the NVIDIA MGX modular reference design platform has added support for NVIDIA Blackwell products, including the NVIDIA GB200 NVL2 platform, designed to deliver strong performance for mainstream large language model inference, retrieval-augmented generation, and data processing.
Nvidia emphasizes that the GB200 NVL2 is suited to emerging fields such as data analytics. With the high-bandwidth memory performance of NVLink-C2C interconnect technology and the dedicated decompression engine in the Blackwell architecture, data processing can be up to 18x faster than on x86 CPUs, with 8x better energy efficiency. "A new round of industrial revolution has begun. Many enterprises and regions are working with NVIDIA to push trillions of dollars' worth of traditional data centers toward accelerated computing, and to build a new type of data center, the AI factory, to produce a new product: artificial intelligence," said Huang Renxun.
Nvidia stated that more than 90 systems, released or in development, from over 25 partners use the MGX reference architecture, cutting development costs by up to three-quarters and reducing development time by two-thirds, to as little as six months. Nvidia also revealed that more than ten global robotics companies, including BYD Electronics, Siemens, Teradyne, and Alphabet subsidiary Intrinsic, are integrating NVIDIA Isaac acceleration libraries, physics-based simulation, and AI models into their software frameworks and robot models to improve the efficiency of factories, warehouses, and distribution centers.