Ilya Sutskever, OpenAI’s co-founder and former chief scientist, is starting a new AI company focused on safety. In a post on Wednesday, Sutskever revealed Safe Superintelligence Inc. (SSI), a startup with “one goal and one product:” creating a safe and powerful AI system.
The announcement describes SSI as a startup that “approaches safety and capabilities in tandem,” letting the company rapidly advance its AI system while still prioritizing safety. It also calls out the external pressure AI teams at companies like OpenAI, Google, and Microsoft often face, saying the company’s “singular focus” allows it to avoid “distraction by management overhead or product cycles.”
“Our business model means safety, security, and progress are all insulated from short-term commercial pressures,” the announcement reads. “This way, we can scale in peace.” In addition to Sutskever, SSI is co-founded by Daniel Gross, a former AI lead at Apple, and Daniel Levy, who previously worked as a member of technical staff at OpenAI.
Last year, Sutskever led the push to oust OpenAI CEO Sam Altman. Sutskever left OpenAI in May and hinted at the start of a new project. Shortly after Sutskever’s departure, AI researcher Jan Leike announced his resignation from OpenAI, citing safety processes that have “taken a backseat to shiny products.” Gretchen Krueger, a policy researcher at OpenAI, also mentioned safety concerns when announcing her departure.
As OpenAI pushes forward with partnerships with Apple and Microsoft, we likely won’t see SSI doing the same anytime soon. During an interview with Bloomberg, Sutskever said SSI’s first product will be safe superintelligence, and the company “will not do anything else” until then.