
21 Depth | Huang Renxun's "Mythical Man-Month": 25,000 People, 3 Years and 36 Months


The NVIDIA GTC conference ran for four days, and the buzz has yet to die down.
On social media around the world, Huang Renxun (Jensen Huang), founder and CEO of Nvidia, has become a star whom attendees queue up to photograph, and Nvidia has become almost synonymous with AI. The Blackwell GPU architecture, the GR00T robot foundation model, and NIM microservices have become the new buzzwords and continue to dominate feeds.
Among them, the most noteworthy is undoubtedly the Blackwell architecture, which underpins the much-discussed B200 and GB200 chips. Pointing to products such as the GB200 chip and the HGX B200 server board during his keynote, Huang Renxun said, "I need to be careful, this is worth 10 billion US dollars."
The story behind the most powerful AI chips is just as eye-catching as the chips themselves. Everyone is curious how many resources Huang Renxun mobilized and how a top-flight talent team was assembled to build today's soaring computing-power empire.
Although Nvidia has not disclosed Blackwell's specific R&D spending, one set of figures offers a rough gauge. Forrester Vice President and Chief Analyst Dai Kun told 21st Century Business Herald that Huang Renxun revealed the scale of the Blackwell effort at that day's press briefing: about 25,000 people working on it together for 3 years.
Nvidia currently has approximately 30,000 employees, and behind everything lie talent and effective organization. The previous-generation architecture, Hopper, carried Nvidia straight to a $2 trillion market value, and the industry will keep watching how the new-generation Blackwell writes the company's next chapter.
"The Myth of Man and Moon"
Based on public information, we can roughly estimate Nvidia's manpower expenses.
Nvidia's median annual pay per employee is approximately 217,500 US dollars. Taking this as a very rough proxy for labor cost, 25,000 × 217,500 × 3 = 16,312,500,000 US dollars, or roughly $16.3 billion in compensation over three years (a back-of-envelope version of this calculation is sketched below). Of course, R&D staff may split their time across projects, do not work 365 days a year, and their salaries vary widely. But even at half that figure, the labor investment alone still comes to roughly eight billion dollars.
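As a quick sanity check of that arithmetic, here is a minimal back-of-envelope sketch in Python. The pay, headcount, and duration figures are the ones quoted in this article, and the 50% utilization factor is purely illustrative, not disclosed NVIDIA data.

```python
# Back-of-envelope estimate of Blackwell's labor cost.
# All inputs are rough figures quoted in the article, not official NVIDIA data.
median_annual_pay_usd = 217_500   # reported median pay per employee
headcount = 25_000                # people said to have worked on Blackwell
years = 3                         # reported development time

gross_estimate = median_annual_pay_usd * headcount * years
print(f"Upper-bound labor cost: ${gross_estimate / 1e9:.1f}B")  # ~$16.3B

# Engineers rarely spend all their time on a single project, so apply an
# illustrative utilization factor (assumed here, not reported) for a lower bound.
utilization = 0.5
print(f"Discounted estimate:    ${gross_estimate * utilization / 1e9:.1f}B")  # ~$8.2B
```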
And that is only the labor component, which also suggests that these seemingly enormous R&D cost figures are plausible when measured on a human scale. Industry insiders have likewise remarked to reporters that Nvidia has the money and its people have the vision.
"This reminds me of a masterpiece in the software industry called 'The Mythical Man Month'," a senior IT practitioner told 21st Century Business Herald reporters. "It is a classic software engineering masterpiece written by the tech guru Frederick P. Brooks, and is one of the most well-known books in the software industry worldwide. This book is not about the story of people and the moon, but it is even greater than the story of people and the moon. The book tells the cost accounting method of creating a software in the software engineering industry, which is to use as many people and months, so it is called the Mythical Man Month."
This man-month way of accounting has since spread throughout the technology industry. Huang Renxun has always stressed that Nvidia is a software company, and in practice software and hardware are now inseparable. Over the years, "contractor" Huang Renxun has led his R&D teams in building NVIDIA's own mythical man-month.
According to "The Myth of Man and Moon," humans and the moon have become the two grassroots measures of software industry research and development. People represent the cost and scale of technical workers, while the month represents the time that software needs to invest in the face of fierce market competition. Using a scientific approach to quantify the human cost of great software development systems, guided by people and time.
25,000 people and 36 months: that is how Nvidia iterated to the new-generation Blackwell architecture. With products launching this year, it is expected to become the market mainstream in 2025. A newer architecture is already in development, and a fresh round of the man-month story will begin again.
The technical team behind it keeps expanding. In terms of R&D headcount, Nvidia had 29,600 employees in 2024, of whom 22,200 worked in R&D, or 75% of the total. In 2023 there were 19,532 R&D staff, a similarly high 74.5%.
As R&D personnel continue to grow, Nvidia's R&D investment is also increasing. According to Zhongtai Securities, AMD's R&D expenses in 2005 were $1.1 billion, about 3.2 times that of NVIDIA. However, in 2022, NVIDIA's R&D expenses reached $7.34 billion (corresponding to the 2023 fiscal year), 1.47 times that of AMD. By 2023, the research and development amount will continue to increase to 8.68 billion US dollars.
"Don't sell chips, sell data centers"
Before the B100, Nvidia's A100 and H100 were already household names. On one hand they were the textbook example of US export restrictions; on the other they were the core chips behind ChatGPT's training capability. From investors to programmers in China, everyone talks about the A100 and H100, and computing-power anxiety has become a new kind of anxiety. Perhaps no electronic product has ever been like Nvidia's GPUs: something you hear about every day yet may never use yourself.
Now, with the launch of the Blackwell GPUs, the Chinese market is bound to be one focus of attention. In an interview with the media, Huang Renxun said, "We have launched the L20 and H20 chips for the Chinese market, and we are doing our best to serve customers in China and other regions."
On policy restrictions, he said the first task is to make sure the policies are understood, and the second is to strengthen supply-chain resilience as much as possible. "When we assemble these parts into a DGX (AI supercomputer), we need hundreds of thousands of parts from all over the world. Of course, many of them come from China; the global supply chain is quite complex," said Huang Renxun.
Nvidia also faces new competitive dynamics along its industry chain. On one hand, major cloud vendors and customers are stepping up development of their own AI chips; on the other, Nvidia has launched cloud services of its own, so the two sides increasingly cross into each other's lanes.
But the two sides pursue different business models. Huang Renxun, for example, stressed that Nvidia does not sell chips; it sells data centers. He also located Nvidia's opportunity in the data center: "If you make GPUs, plenty of people will make GPUs, and the GPU market is different from the data center market we are pursuing. The global data center market is about $250 billion, it is rapidly shifting toward accelerated computing, and it is still growing. That is our opportunity."
This means Nvidia is now selling complete computing-power solutions rather than chip hardware alone. That is why he keeps arguing that Nvidia wins on total cost of ownership (TCO): even if competitors' chips were free, they would not end up cheap enough.
And Nvidia's own cloud service is just one form of solution. "Although Nvidia has launched the DGX Cloud service, our strategy is still to work with cloud service providers and put our cloud inside their clouds. Nvidia will not become a cloud computing company; our goal is to build software so that developers and cloud service providers worldwide adopt Nvidia's architecture to build products," said Huang Renxun.
On the relationship between Nvidia and the cloud vendors, Dai Kun told reporters, "It is competition, but it is also complementary; the DGX architecture sits on AWS, for example. At present the revenue from cooperation far outweighs the losses from competition, and DGX is one of the main forms in which computing power is delivered. Nvidia now supplies computing power for AI innovation everywhere, in a unified way and through different channels."
As the businesses of these companies intersect and integrate, competitive relationships across the industry are shifting as well.
Vertical and horizontal alliances
In Huang Renxun's view, AI has sparked a new industrial revolution, which is still in its early stages. He called on the industry to join and prepare together.
With the continuous iteration of large models such as GPT, Sora, Claude, and Llama, the industry's demand for AI chips keeps growing, and Nvidia, Intel, AMD, Huawei, and others are all hunting for new growth points.
Nvidia has further strengthened its moat on this foundation; the Blackwell GPU goes without saying. Two other moves are worth watching: the build-out of its software products and closer cooperation with the industry chain.
First, it is worth noting that Nvidia is also a software company: under the product menu on its official website there are only two directories, hardware and software, and the software directory is the longer one. Huang Renxun's foresight lies in understanding that a hardware company cannot focus on hardware alone but must focus on the ecosystem, and the other half of that ecosystem is software, such as CUDA.
CUDA, one might say, is the Linux plus Windows of the computing world: it has community developers, a wide range of commercial applications, and even a presence in the cloud. Today nearly all AI chips strive for compatibility with CUDA. Beyond the CUDA moat, Nvidia is building a new barrier for generative AI, the "AI Foundry" used to build software.
Huang Renxun said NVIDIA aims to become the foundry, or contract manufacturer, of the AI industry, with three core pillars: NIM microservices (an inference platform), the NeMo fine-tuning service, and DGX Cloud. The newly launched NIM microservices package pre-trained AI models so that developers can build AI applications easily and quickly; NeMo is the tool for fine-tuning and customizing large models; and DGX Cloud provides the computing infrastructure. Together these will change how applications are developed: programmers will no longer need to rewrite code from scratch, and with NIM's various microservices they can quickly assemble the core workflow, as the sketch below illustrates.
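NIM microservices are generally described as exposing OpenAI-compatible inference endpoints, so a developer can call them with an off-the-shelf client. The following is a rough, hedged sketch of that workflow; the base URL, model name, and NIM_API_KEY environment variable are placeholder assumptions, not details from the article.

```python
# Minimal sketch: calling a NIM inference microservice through its
# OpenAI-compatible API. Endpoint URL, model name, and credential
# variable below are illustrative assumptions, not values from the article.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed NIM-hosted endpoint
    api_key=os.environ["NIM_API_KEY"],                # hypothetical credential variable
)

response = client.chat.completions.create(
    model="meta/llama3-8b-instruct",  # example of a model packaged as a NIM
    messages=[{"role": "user", "content": "Summarize what a GPU does."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```

The point of the sketch is the developer experience the article describes: the model, runtime, and serving stack arrive pre-packaged, so application code reduces to a handful of API calls rather than hand-built inference plumbing.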
On the other hand, Nvidia is further tightening its relationships with the supply chain, with particular attention to TSMC, Samsung, and SK Hynix, because TSMC's advanced CoWoS packaging and the memory makers' HBM (high-bandwidth memory) are currently the bottlenecks limiting GPU mass production.
Huang Renxun also told the media, "Our demand for CoWoS is very high this year." TSMC reportedly plans to invest $16 billion in advanced packaging plants; its CoWoS capacity target for this year is 35,000 wafers per month, rising to 44,000 wafers per month by the end of 2025. As new fabs come online, TSMC keeps expanding CoWoS capacity.
Beyond advanced packaging, AI's demand for high-performance memory is also climbing, and HBM remains in hot demand. Samsung and SK Hynix are both racing to expand their HBM output.
According to TrendForce, the DRAM industry's planned HBM TSV capacity is expected to account for about 14% of total DRAM capacity by the end of 2024, with supply growing roughly 260% year on year. HBM's share of overall DRAM output value was about 8.4% in 2023 and is expected to expand to 20.1% by the end of 2024.
"HBM is very complex and has high added value. We have invested a lot of money in this business. Don't think that HBM is DDR5, this is completely different. Those DDR memory will become HBM in the future, and Nvidia will also grow as they grow." Huang Renxun said, hoping to have closer cooperation with Samsung and Hynix.
Clearly, the world-dominating NVIDIA keeps evolving itself and reinforcing its supply chain, and new journeys and new contests have already begun. After the B100 series there may be products such as the X100 and GX200, and new man-month stories are starting to unfold.