OpenAI Co-Founder Sutskever Starts His Own Company For Safe AI Future: Know More
Sutskever left his role as co-founder and chief scientist at OpenAI in May this year and has now decided to work on safe AI technology.

OpenAI co-founder Ilya Sutskever left the company in May this year to pursue his own ambitions in AI, and he has now officially confirmed what he is doing next. Sutskever has announced a new company called Safe Superintelligence, or SSI, which promises to tackle the ‘most important problem of our time.’

He is starting the company with a single, unified goal: keeping superintelligence safe. “We’ve started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence,” Sutskever mentioned in his post on X this week.

Start Of A Safer AI Future?

Sutskever left OpenAI, the company he co-founded with Sam Altman and others, earlier this year. He has even been described as the main orchestrator of Altman’s brief ouster, a chapter that neither of them seems keen to revisit as they look to the future.

In some ways, he takes a dig at his former company, which he suggests was drifting toward building shiny products rather than focusing on the safety and ethics of the technology.

“Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.” It is notable that he talks about safety without loosening his grip on the pace of advancement. “We plan to advance capabilities as fast as possible while making sure our safety always remains ahead,” he adds.

Sutskever already has Daniel Gross and Daniel Levy working alongside him at the company, which says it is based in Palo Alto in the US and Tel Aviv in Israel. The focus on safe AI has also been raised by other staff and engineers at OpenAI, some of whom are leaving the firm in a bid to keep AI systems in check rather than let them go astray in the search for quick success.

Jan Leike, a former AI researcher at OpenAI who quit the company and shared his concerns publicly, has said that OpenAI is chasing shiny products instead of focusing on the safety of its AI systems and processes. Leike also warned the company that rapid, unchecked advances in AI could turn into a dangerous situation for all of humanity.

Sutskever seems to be headed down that same path of securing AI systems, and it would hardly be surprising to see Leike join SSI’s ranks in the coming weeks.
