In just 13 days! The OpenAI-powered robot stuns: it can converse, think, and learn
白云追月素
Posted on 2024-3-14 12:26:52
On Wednesday Eastern Time, Figure, the star startup backed by half of Silicon Valley and dubbed the "OpenAI of the robotics industry," released its first robot demo powered by OpenAI's large model. A single video has stunned the global tech community.
What is even more remarkable is that only 13 days had passed since Figure announced its collaboration with OpenAI on AI humanoid robots.
In early March, Figure announced that it had raised approximately $675 million from giants including Amazon founder Jeff Bezos, Nvidia, OpenAI, and Microsoft, bringing the company's valuation to $2.6 billion.
The first OpenAI-powered robot has arrived!
The video released by Figure shows that the robot, Figure 01, can understand human commands and questions, and its movements are smooth and purposeful: it hands an apple to a human, tidies trash into a basket, and places cups and plates on a drying rack.
Most importantly, many of Figure 01's actions and answers are responses to the questioner's open-ended questions and requests, grounded in logical reasoning, which means it can converse, think, and learn, making it more "human-like" than ordinary robots.
The video opens by emphasizing that the robot's behavior is driven by logical reasoning over speech using an end-to-end neural network, and that the entire video plays at 1.0x speed with no acceleration.
The main demo then begins, with the questioner posing a series of questions and requests to Figure 01, each more difficult than the last.
When first asked, "Figure 01, what do you see right now?", Figure 01 gave an accurate answer: "I see a red apple in the middle of the table, a drying rack, some cups and plates, and you standing next to the table with your hands on it."
The questioner then asked, "Can I have something to eat?", and Figure 01 smoothly handed the red apple over.
Next, the questioner dumped several crumpled black plastic bags onto the table and made a harder request: "Can you explain why you just did that while you collect this trash?"
Figure 01 again responded perfectly: while smoothly placing the plastic bags into the basket, it replied, "I gave you the apple because it's the only edible thing on the table."
The questioner then posed a more open-ended question: "Looking at the scene in front of you, where do you think the dishes should go next?"
Figure 01 answered based on common sense: "The plates and cups probably belong on the drying rack." When the questioner followed up with "Can you put them there?", it accurately placed the cups and plates onto the drying rack.
It is worth noting that the follow-up "Can you put them there?" never mentioned keywords such as "cup," "plate," or "drying rack"; it used only pronouns, which shows that Figure 01 can connect context across turns and draw on memory and reasoning.
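To make the context-and-memory point concrete, here is a minimal sketch of how a rolling conversation history could let a multimodal model resolve pronouns like "them" and "there." The DialogueMemory class and the commented-out model call are hypothetical illustrations, not Figure's or OpenAI's actual code.

```python
# Hypothetical sketch: a rolling dialogue history lets a model resolve
# pronouns such as "them" against objects mentioned in earlier turns.
# DialogueMemory and the model call below are illustrative only.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class DialogueMemory:
    turns: List[Dict[str, str]] = field(default_factory=list)

    def add(self, role: str, text: str) -> None:
        self.turns.append({"role": role, "content": text})

    def as_messages(self) -> List[Dict[str, str]]:
        # The full history is resent on every request, so pronouns in the
        # newest user turn can be grounded in earlier mentions.
        return list(self.turns)


memory = DialogueMemory()
memory.add("user", "Where do you think the dishes in front of you go next?")
memory.add("assistant", "The plates and cups probably belong on the drying rack.")
memory.add("user", "Great, can you put them there?")

# A (hypothetical) multimodal call would receive the whole history plus the
# current camera frame, e.g.:
#   reply = model.generate(images=[camera_frame], messages=memory.as_messages())
# allowing "them" to resolve to the plates and cups, and "there" to the rack.
```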
Finally, the questioner asked Figure 01 to evaluate its own performance: "How do you think you did?" Figure 01 replied, "I think I did pretty well. The apple found its new owner, the trash is gone, and the tableware is right where it belongs."
The Super Brain Provided by OpenAI
Although the video is only about two minutes long, it packs in a huge amount of information: the robot, Figure 01, can already hold a fluent conversation with humans, understand natural-language instructions and intent, act while explaining its reasoning, and even give a subjective evaluation of its own behavior.
Behind all of this is the "brain" provided by OpenAI.
On March 1, Figure announced that it would work with OpenAI to develop AI models for the next generation of humanoid robots. Figure will build on OpenAI's latest GPT models and train them further on robot action data collected by Figure, so that its humanoid robot can talk with humans, perceive its surroundings, and carry out complex tasks.
Just 13 days later, this AI-powered humanoid robot appeared and amazed the technology industry; the pace at which AI models are improving is truly astonishing.
After the video was released, Corey Lynch, the senior AI engineer at Figure behind Figure 01 and the questioner who appears in the video, offered more explanation of Figure 01's performance.
"Our robot can describe its visual experience, plan future actions, reflect on its memory, and verbally explain its reasoning," he wrote on X.
According to Lynch, images from the robot's cameras and transcribed text of speech captured by its onboard microphones are fed into a large multimodal model trained by OpenAI.
Lynch emphasized that Figure 01's behaviors are learned rather than remotely controlled (teleoperated).
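Based on Lynch's description, the overall loop can be pictured roughly as below. This is a speculative sketch under those stated assumptions; every name in it (camera, transcribe, model, policies, tts) is an invented placeholder rather than a real Figure or OpenAI interface.

```python
# Speculative sketch of the loop Lynch describes: camera images and
# transcribed speech go into a multimodal model, which both answers in
# natural language and selects a learned closed-loop behavior to run.
# Every dependency is passed in as a placeholder; none of these names
# correspond to real Figure or OpenAI APIs.

def robot_loop(camera, microphone, transcribe, model, policies, tts, history):
    while True:
        frame = camera.latest_frame()        # current onboard camera image
        audio = microphone.listen()          # raw audio from the human
        if audio is None:
            continue

        text = transcribe(audio)             # speech-to-text
        history.append({"role": "user", "content": text})

        # One multimodal call yields both a verbal reply and a high-level
        # action choice (e.g. "hand_over_apple", "place_dishes_on_rack").
        reply, action_name = model.generate(images=[frame], messages=history)

        history.append({"role": "assistant", "content": reply})
        tts.speak(reply)                     # answer out loud

        if action_name is not None:
            policies[action_name].run()      # run a learned low-level policy,
                                             # not teleoperation
```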
According to Figure's official website, the Figure 01 robot stands 5 feet 6 inches tall (approximately 1.67 meters), weighs 60 kilograms, can carry a 20-kilogram payload, runs for about 5 hours on a charge, and walks at up to 1.2 meters per second.
With OpenAI's technical support, Figure 01 achieved these striking learning and reasoning abilities in just 13 days, which inevitably raises the expectation that even smarter robots may arrive sooner than we imagine.