
Is GPT-5 coming? OpenAI announces it has begun training its next-generation frontier model!

海角七号

On Tuesday, May 28 local time, OpenAI announced that its board of directors has established a safety and security committee responsible for overseeing the direction of its AI development.

OpenAI establishes a new safety committee, staffed entirely by insiders
OpenAI announced on Tuesday the formation of a new safety committee responsible for overseeing "critical" safety and security decisions related to the company's projects and operations. Notably, every member of the committee is an OpenAI insider: directors Bret Taylor (chair), Adam D'Angelo, and Nicole Seligman, plus CEO Sam Altman, with no external members. The committee will make recommendations to the full board on key safety and security decisions for OpenAI's projects and operations.
The committee's first task will be to evaluate and further develop OpenAI's processes and safeguards over the next 90 days. At the end of that period, the committee will share its recommendations with the full board; following the board's review, OpenAI will publicly share an update on the adopted recommendations in a manner consistent with safety and security.
The OpenAI board also stated that several of the company's technical and policy experts will join the committee: Aleksander Madry (Head of Preparedness), Lilian Weng (Head of Safety Systems), John Schulman (Head of Alignment Science), Matt Knight (Head of Security), and Jakub Pachocki (Chief Scientist).
In addition, OpenAI will retain and consult other safety, security, and technical experts to support this work, including Rob Joyce, a former cybersecurity official who advises OpenAI on security, and John Carlin.
In recent months, several senior safety personnel have resigned from OpenAI's technical teams, and some of these former employees have voiced concern that AI safety has become a lower priority at OpenAI.
Daniel Kokotajlo, who previously worked on OpenAI's governance team, resigned in April; he wrote on his personal blog that he had lost confidence that OpenAI would "behave responsibly" as it releases increasingly powerful AI. And in May, Ilya Sutskever, OpenAI co-founder and former chief scientist, resigned after a long-running struggle with Altman and his allies, reportedly in part over Altman's eagerness to launch AI-powered products at the expense of safety work.
More recently, Jan Leike, a former DeepMind researcher who at OpenAI helped develop ChatGPT and its predecessor InstructGPT, resigned from his safety research role. In a series of posts on X, he said he believed OpenAI was "not on the right track to address AI security and safety issues." Meanwhile, AI policy researcher Gretchen Krueger left OpenAI last week; echoing Leike's view, she called on the company to improve accountability and transparency and to "use their technology more cautiously."
According to media reports, beyond Sutskever, Kokotajlo, Leike, and Krueger, at least five of OpenAI's most safety-conscious employees have either resigned or been forced out since the end of last year, including former board members Helen Toner and Tasha McCauley. The two recently argued in a media op-ed that, under Altman's leadership, OpenAI cannot be trusted to hold itself accountable.
OpenAI announces it has begun training its next-generation frontier model; its AGI vision turns more pragmatic
Notably, the announcement also contained a piece of heavyweight news: OpenAI said it has recently begun training its next frontier model, which it hopes will surpass the current GPT-4. "We anticipate the resulting systems to bring us to the next level of capabilities on our path to AGI," the company wrote. "While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment."
For context, OpenAI released GPT-4 in March 2023. In its debut demo, GPT-4 built a working website from nothing more than a sketch that President Greg Brockman had drawn in a notebook; arguably, many investors only began to believe that "AI may really change something" after seeing that demonstration.
Since then, OpenAI has gradually rolled out a series of text, image, audio, and video generation applications built around GPT-4. Just two weeks ago, the company released the latest version, GPT-4o, whose greatly improved interaction speed enables real-time exchanges that feel like talking to a real person. Still, GPT-4o, though slightly stronger across the board, has not broken out of the GPT-4 capability class.
As a result, outside expectations for GPT-5 have long run high. According to OpenAI's latest statement, however, the next-generation model may not reach the public until next year: large AI models typically take months or even years to train, and development teams then need several more months of fine-tuning before a public release.
As OpenAI builds AI that edges closer to human-level intelligence, the company is becoming more cautious in its statements about artificial general intelligence.
Anna Makanju, OpenAI's Vice President of Global Affairs, said that OpenAI's mission is to build general intelligence capable of accomplishing tasks at the level of human cognition.
Earlier, Sam Altman had said that OpenAI's ultimate goal is to build a "superintelligence" more advanced than humans, and that he spends half of his time researching how to build it.