Former OpenAI chief scientist Ilya Sutskever, former OpenAI engineer Daniel Levy, and investor Daniel Gross have founded Safe Superintelligence, Inc. (SSI) to pursue AI safety and capabilities development free from commercial pressures.

In a significant move within the artificial intelligence (AI) sector, Ilya Sutskever, co-founder and former chief scientist of OpenAI; former OpenAI engineer Daniel Levy; and Daniel Gross, an investor and former partner at the startup accelerator Y Combinator, have announced the formation of Safe Superintelligence, Inc. (SSI). The new venture aims to advance AI safety and capabilities in tandem, a mission reflected in its name.


A Strategic Vision for AI Development

SSI, a U.S.-based company with offices in Palo Alto and Tel Aviv, was introduced through an online announcement on June 19. The founders emphasized that the company's pursuit of AI safety and advancement is unhindered by typical business distractions. They stated:


“Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.”


This approach highlights their commitment to prioritizing the long-term implications and ethical considerations of AI development over immediate commercial gains.


Background of the Founders

Ilya Sutskever, who left OpenAI on May 14, had been involved in the controversial firing of CEO Sam Altman in November 2023 and held an unclear role at the company after stepping down from its board following Altman's return. Daniel Levy exited OpenAI shortly after Sutskever. Their departures mark a pivotal moment in their careers as they shift their focus entirely toward the safe development of AI.


Sutskever and Gross have long been concerned about AI safety. Sutskever, along with Jan Leike, led OpenAI's Superalignment team, established in July 2023 to address how to manage and control AI systems more intelligent than humans, often referred to as artificial general intelligence (AGI). OpenAI had dedicated 20% of its computing power to the initiative. However, following the departures of key researchers, including Leike, who now heads a team at the Amazon-backed AI startup Anthropic, OpenAI dissolved the Superalignment team.


Industry Concerns About AI Safety

The founding of SSI comes amid broader concern among tech leaders about the trajectory of AI development. Ethereum co-founder Vitalik Buterin has called AGI "risky," though he argues that such systems themselves pose far less danger than potential corporate or military misuse of AI.


Notably, Tesla CEO Elon Musk and Apple co-founder Steve Wozniak were among more than 2,600 tech leaders and researchers who signed an open letter calling for a six-month pause on the training of advanced AI systems so that the profound risks these technologies might pose could be assessed.


Future Directions and Hiring

SSI's launch announcement also noted that the company is expanding its team, actively hiring engineers and researchers who are passionate about AI safety and innovation. The recruitment drive underscores the company's commitment to building a strong foundation for addressing the critical challenges and opportunities in AI development.


The establishment of Safe Superintelligence, Inc. by prominent former OpenAI figures marks a significant development in the AI landscape. By focusing solely on AI safety and capabilities, without succumbing to short-term commercial pressures, SSI aims to steer the future of AI in a more secure and ethically sound direction. As the company grows, it is likely to attract attention and talent from across the industry, further solidifying its role in shaping the safe advancement of artificial intelligence.


(DEREK ANDERSEN, COINTELEGRAPH, 2024)