
Meta releases open-source large model Llama 3.1, with strong support from Nvidia


Science and Technology Innovation Board Daily, July 24th (Reporter Zhang Yangyang) -- Zuckerberg intends to see the open-sourcing of large models through to the end.
Early this morning, Meta officially released its new generation of open-source large models, the Llama 3.1 series, which comes in three sizes: 8B, 70B, and 405B, with the context window extended to a maximum of 128K tokens.
Meta founder Mark Zuckerberg also published a post on the company's official website strongly endorsing his own models. He said that most leading technology companies and scientific research today are built on open-source software, that this is the way forward for AI, and that Meta is moving toward becoming the industry standard for open-source AI.
It should be emphasized that the dispute between open source and closed source has a long history in the technology industry. Critics argue that open-sourcing can conceal a lack of technological originality, with vendors merely making minor adjustments to existing open-source models rather than delivering substantive innovation. Robin Li, the founder of Baidu, has even said that open-source models have value in specific scenarios such as academic research and teaching, but are not suitable for most application scenarios. Supporters counter that customized improvement on top of mature open-source architectures is the norm in technological development and can drive rapid innovation and progress.
In the field of large models, the relative strengths of open-source and closed-source models are frequently compared. So far, open-source models have mostly lagged behind closed-source models in functionality and performance, but with the release of Llama 3.1, a new round of intense competition between the two camps may be beginning.
According to benchmark data provided by Meta, Llama 3.1 405B has 405 billion parameters, making it one of the largest large language models released in recent years. The model was trained on more than 15 trillion tokens using over 16,000 H100 GPUs, the first Llama model in Meta's history to be trained at this scale. Meta says that in advanced capabilities such as common-sense reasoning, steerability, mathematics, tool use, and multilingual translation, Llama 3.1 can rival top closed-source models such as GPT-4o and Claude 3.5 Sonnet.
Llama 3.1 is now available for download from Meta's official website and Hugging Face. The latest figures show that cumulative downloads across all Llama versions have exceeded 300 million.
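For readers who want to try the release, below is a minimal Python sketch of pulling one of the smaller checkpoints through the Hugging Face transformers library. The repository name meta-llama/Meta-Llama-3.1-8B-Instruct and the gated-access requirement are assumptions based on Meta's usual distribution practice, not details stated in this article.

```python
# Minimal sketch: load a smaller Llama 3.1 checkpoint from Hugging Face and run one generation.
# The repo id below is an assumption; access typically has to be granted on Hugging Face first.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"  # assumed repository name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# Quick smoke test to confirm the model loads and responds.
inputs = tokenizer("Summarize why open-source models matter:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```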
On the same day, Nvidia launched a complementary set of training and deployment services, providing strong support for Llama 3.1.
The Science and Technology Innovation Board Daily reporter learned from Nvidia that the company has officially launched the new NVIDIA AI Foundry service and NVIDIA NIM inference microservices. NVIDIA AI Foundry is powered by the NVIDIA DGX Cloud AI platform, which is co-designed by NVIDIA and leading public clouds and can provide enterprises with large amounts of computing resources.
Used together with the Llama 3.1 family of open-source models, NVIDIA AI Foundry and NVIDIA NIM allow enterprises to create custom "supermodels" for their specific industry use cases. Enterprises can train these supermodels with their own data as well as synthetic data generated by Llama 3.1 405B and the NVIDIA Nemotron Reward model.
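NVIDIA AI Foundry itself is a managed service, but the underlying idea of using a large Llama 3.1 model to synthesize domain-specific training data for a smaller custom model can be illustrated with a generic sketch. The prompts, file name, and use of the transformers pipeline API below are illustrative assumptions only and do not reflect NVIDIA's actual tooling or APIs.

```python
# Illustrative sketch of the synthetic-data idea: a large Llama 3.1 model drafts domain-specific
# Q&A pairs that could later fine-tune a smaller custom model. Prompts, repo id, and output format
# are assumptions for illustration; this is not NVIDIA AI Foundry's or NIM's actual API.
import json
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3.1-405B-Instruct",  # assumed repo id; this size realistically needs a multi-GPU cluster
    device_map="auto",
)

seed_questions = [
    "How do I reset a customer password in our billing portal?",
    "What is the refund window for annual subscriptions?",
]

records = []
for question in seed_questions:
    prompt = f"Answer the following support question concisely:\n{question}\nAnswer:"
    answer = generator(prompt, max_new_tokens=128)[0]["generated_text"]
    records.append({"question": question, "answer": answer})

# Save the synthetic pairs; a reward model (such as NVIDIA Nemotron Reward) would normally
# filter or rank them before they are used for fine-tuning.
with open("synthetic_support_data.jsonl", "w") as f:
    for r in records:
        f.write(json.dumps(r) + "\n")
```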
Nvidia founder and CEO Jensen Huang said that Meta's open-source Llama 3.1 marks a critical moment for global enterprises adopting generative AI, and that it will ignite a wave of advanced generative AI applications across enterprises and industries. NVIDIA AI Foundry has integrated Llama 3.1 throughout its entire pipeline and can help enterprises build and deploy custom Llama supermodels.