
Challenging Jensen Huang? Meta's Chief AI Scientist: Superintelligent AI Will Not Arrive Soon

白云追月素

Last weekend, Facebook's parent company Meta held a media event in San Francisco to celebrate the 10th anniversary of its Fundamental AI Research (FAIR) team.
At the event, Yann LeCun, Meta's Chief AI Scientist, a pioneer of deep learning, and a Turing Award winner, said he believes it will take decades before humans can train an artificial intelligence system that not only summarizes text but also has human-like perception and common sense.
His view stands in sharp contrast to that of NVIDIA CEO Jensen Huang, who said last week that artificial intelligence will become "fairly competitive" with humans in less than five years, surpassing them in many mentally intensive tasks.
"Cat-level" artificial intelligence is more likely to come first
LeCun said that before "human-level" artificial intelligence emerges, society is more likely to see "cat-level" or "dog-level" AI. The technology industry's current focus on language models and text data, he argued, is not enough to create the kind of advanced, human-like AI systems that researchers have dreamed of for decades.
LeCun believes one factor limiting the current pace of AI progress is that training data comes mainly from text.
"Text is a very poor source of information," explained Yang Likun. Currently, training a modern language model requires a huge amount of text, which takes humans 20000 years to read. However,
"Even if you train a system with reading materials equivalent to 20000 years, they still don't understand: if A and B are the same, then B is the same as A. There are many very basic things in the world that they cannot understand through this kind of training."
For that reason, LeCun and other Meta AI executives have been working intensively on adapting the so-called transformer models used to build applications such as ChatGPT so that they can handle many kinds of data, including audio, images, and video. They believe these AI systems will only reach higher levels of capability once they can discover the billions of hidden correlations that may exist among these different types of data.
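As a rough illustration of what processing many kinds of data in one model can look like, the sketch below projects text, audio, and image features into a shared embedding space and feeds them to a single transformer encoder. It is a minimal, assumed example, not Meta's actual architecture; the dimensions, layer counts, and variable names are arbitrary choices made for the sketch.

# Minimal multimodal sketch (illustrative only, not Meta's internal model):
# project each modality into a shared embedding space, concatenate the
# resulting tokens, and let one transformer attend across all of them.
import torch
import torch.nn as nn

D_MODEL = 256  # shared embedding width, chosen arbitrarily for this sketch

text_proj = nn.Embedding(num_embeddings=10_000, embedding_dim=D_MODEL)  # token ids to vectors
audio_proj = nn.Linear(80, D_MODEL)    # e.g. 80-dim spectrogram frames to vectors
image_proj = nn.Linear(768, D_MODEL)   # e.g. 768-dim image patch features to vectors

encoder_layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=8, batch_first=True)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=4)

# Dummy inputs: one example with a few "tokens" per modality.
text_tokens = torch.randint(0, 10_000, (1, 12))   # 12 text tokens
audio_frames = torch.randn(1, 20, 80)             # 20 audio frames
image_patches = torch.randn(1, 16, 768)           # 16 image patches

# One combined sequence lets self-attention relate text, audio, and image
# positions to each other, the kind of cross-modal correlation the article describes.
sequence = torch.cat(
    [text_proj(text_tokens), audio_proj(audio_frames), image_proj(image_patches)],
    dim=1,
)
fused = encoder(sequence)
print(fused.shape)  # torch.Size([1, 48, 256])

Running the sketch simply prints the shape of the fused representation; in a real system, task-specific heads would be trained on top of it.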
Meta executives demonstrated one of their research projects: a person playing tennis while wearing AR glasses can see AI-generated visual cues showing how to grip the racket correctly and swing with proper form. The AI model needed to power such a digital tennis assistant requires 3D visual data in addition to text and audio.
Nvidia will continue to benefit
These so-called multimodal AI systems represent the next frontier, but developing them will cost even more. LeCun predicts that as more companies such as Meta and Google's parent company Alphabet pursue more advanced AI models, Nvidia stands to gain an even greater advantage, especially if no other competitor emerges.
Nvidia has been one of the biggest beneficiaries of generative artificial intelligence, and its expensive GPUs have become the standard tool for training large language models.
"I know Jensen (Huang Renxun)," Yang Likun said, "Nvidia can benefit a lot from the current AI boom." This is an AI war, he's providing weapons. "
If people think artificial intelligence is in fashion, they have to buy more GPUs, LeCun said. As long as researchers at companies such as OpenAI keep pursuing AGI (artificial general intelligence), they will need more of Nvidia's chips.
So, as researchers at Meta and elsewhere continue to develop these complex AI models, does the technology industry need more hardware suppliers?
LeCun's answer: "Not needed right now, but it would be even better if there were."
He added that GPU technology remains the gold standard for AI, though the computer chips of the future may not be called GPUs.
The usefulness and feasibility of quantum computers are in doubt
Beyond artificial intelligence, LeCun is also skeptical of quantum computing.
Technology giants such as Microsoft, IBM, and Google have poured significant resources into quantum computing. Many researchers believe quantum machines could enable major advances in data-intensive fields such as drug discovery, because they can perform many computations at once using so-called quantum bits (qubits) rather than the ordinary binary bits used in modern computing.
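As a toy illustration of the contrast this paragraph draws (an assumed, simplified picture, not a statement about any vendor's hardware), a classical n-bit register holds exactly one of its 2^n possible values at a time, while an n-qubit state is described by complex amplitudes over all 2^n basis states at once:

# Toy comparison of classical bits and qubit state vectors (illustrative only).
import numpy as np

n = 3

# A classical 3-bit register stores exactly one of 2**n = 8 values at a time.
classical_register = 0b101  # the single value 5

# A 3-qubit state is a vector of 2**n complex amplitudes; measuring it yields
# basis state k with probability |amplitude[k]|**2.
amplitudes = np.ones(2**n, dtype=complex) / np.sqrt(2**n)  # equal superposition
probabilities = np.abs(amplitudes) ** 2

print("classical register value:", classical_register)
print("number of amplitudes describing the 3-qubit state:", amplitudes.size)
print("measurement probabilities sum to", round(probabilities.sum(), 6))

Whether that mathematical difference translates into practical speedups is exactly the point LeCun questions below.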
But LeCun expressed doubts:
"The problems you can solve with quantum computing can also be solved faster with classical computers."
"Quantum computing is a fascinating scientific topic," Yang Likun said, but it is currently unclear "the practical significance of quantum computers and the possibility of manufacturing truly useful quantum computers.".
Mike Schroepfer, a Senior Fellow at Meta and its former Chief Technology Officer, agrees. He evaluates quantum technology every few years and believes useful quantum machines "may appear at some point, but the time horizon is so long that it is irrelevant to what we are doing now."