
The American AI industry is shaken: Ilya Sutskever, central figure in OpenAI's boardroom drama, officially announces his new venture

楚一帆

On Wednesday local time, Ilya Sutskever — OpenAI co-founder and former chief scientist, a leading authority in deep learning who resigned from the company last month, and who cast a decisive board vote last year to oust Sam Altman before reversing course — officially announced his new AI venture after striking out on his own.
(Source: X)
Simply put, Sutskever has founded a new company called Safe Superintelligence Inc. (SSI), with the goal of creating a safe superintelligence in a single step.
Safety first, straight to the ultimate goal
Sutskever plans to build a safe and powerful AI system directly within a pure research organization, without launching any commercial products or services in the short term. He told the media: "This company is special because its first product will be a safe superintelligence, and it will do nothing else until that day arrives. It will be fully insulated from outside pressure, with no need to deal with large, complex products or get caught up in fierce competition."
Safety first, no commercialization, immunity to outside pressure. Sutskever did not mention OpenAI once in that statement, but the subtext is self-evident. Although OpenAI's boardroom fight ended in a swift and decisive victory for Altman, the underlying struggle between accelerationism and safety-ism did not end with it.
Despite their differing philosophies, the two sides have maintained a cordial relationship in private. On May 15 this year, when Sutskever announced his departure after nearly a decade at OpenAI, he posted a group photo with the leadership and expressed his belief that under Altman and the others, OpenAI would build a safe and beneficial AGI (artificial general intelligence).
(Source: X)
Altman responded that he was "deeply saddened" by Sutskever's departure, and said that without Sutskever there would be no OpenAI as it exists today.
Since the end of last year's boardroom fight, Sutskever has remained silent on the whole affair, and that remains true today. Asked about his relationship with Altman, he answered simply "good"; asked about his experience of the past few months, he said only "very strange."
"Safety like nuclear safety"
In a sense, Sutskever cannot yet precisely define where the boundary of AI system safety lies; he can only say that he has some different ideas.
Sutskever hinted that his new company will attempt to achieve safety through "engineering breakthroughs built into the AI system," rather than "guardrails" bolted on as a stopgap. He emphasized: "By safety, we mean safety like nuclear safety, not safety in the sense of 'trust and safety.'"
He said he has spent years thinking about AI safety and has several approaches in mind. "At the most basic level," he explained, "a safe superintelligence should have the property of not harming humanity at scale. After that, we can say we would like it to be a force for good, a force built on key values."
Besides the famous Sutskever, SSI has two other co-founders: Daniel Gross, a former Apple machine learning executive and well-known technology investor, and Daniel Levy, an engineer who trained large models alongside Sutskever at OpenAI.
Levy said his vision aligns completely with Sutskever's: a small, capable team, with everyone focused on the single goal of a safe superintelligence.
Although it is not yet clear what gives SSI the confidence to promise superintelligence "in one step" (who its investors are and how much has been committed), Gross made clear that the company will indeed face many problems, but raising money will not be one of them.
Returning to OpenAI's original intention
From this series of visions, it is not hard to see that the so-called "safe superintelligence" is essentially OpenAI's founding concept. But as the cost of training large models soared, OpenAI had to partner with Microsoft in exchange for funding and computing power to keep its business going.
The same question will arise on SSI's path: are its investors really willing to pour in large amounts of money and watch the company produce nothing until the ultimate goal of "superintelligence" is reached?
Incidentally, "superintelligence" is itself a theoretical concept: an AI system that surpasses human-level ability, more advanced than what most of the world's top technology companies are currently pursuing. There is no industry consensus on whether such intelligence is achievable, or on how such a system would be built.
Notably, however, the very first sentence of SSI's inaugural announcement reads: "Superintelligence is within reach."
Attachment: SSI announcement
Safe Superintelligence Inc
Superintelligence is within reach.
Building safe superintelligence is the most important technical problem of our time.
We have started the world's first straight-shot SSI lab, with one goal and one product: a safe superintelligence.
It is called Safe Superintelligence Inc.
SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI.
We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure safety always remains ahead.
This way, we can scale in peace.
Our singular focus means no distraction by management overhead or product cycles, and our business model means safety and progress are insulated from short-term commercial pressures.
We are an American company with offices in Palo Alto and Tel Aviv, where we have deep roots and the ability to recruit top technical talent.
We are assembling a lean, world-class team of engineers and researchers focused solely on SSI.
If that's you, we offer an opportunity to do your life's work and help solve the most important technical challenge of our age.
Now is the time. Join us.
Ilya Sutskever, Daniel Gross, Daniel Levy
June 19, 2024