$30B+ valuation of new "safe" super-intelligence startup
This was not on my 2025 bingo card, but I'm glad to see it.
OpenAI co-founder and former Chief Scientist Ilya Sutskever raised more than $1 billion (with a B!) for his new startup, Safe Superintelligence (SSI).
You might remember Ilya as one of the board members behind the ousting of fellow OpenAI co-founder Sam Altman back in 2023. There was some mystery & drama involved in that whole shakeup (which I won’t dive into today) - but after everything went down, Ilya left OpenAI and has been off working on something else.
Which, apparently, is a startup “focused on developing AI that outsmarts humans in a safe way” - and, yes, thank you, safety first.
This new company is valued at over $30 billion (!!!!), making it one of the most valuable private AI companies in the world.
And something else interesting to note: the company has no market-ready product, so this valuation could just be based on hopes & dreams of a safer world… absolutely wild!
I guess in this case Ilya IS the product.
And investors are all in.
But a few questions, if I may:
Will these extra safety guidelines hinder SSI’s progress, causing the company to be left behind by other, non-‘safe’ companies? I don’t see Elon slowing down for safety.
How exactly can we build a “safe” super-intelligence if it’s supposed to be smarter than us? As my AI expert/partner said in response to this news, “By definition, if it’s smarter than us then it should be able to outsmart us.” 🤯 #yikes
And not to be dramatic after consuming sci-fi content my whole life but… WHAT IF this “safe” super-intelligence figures out that humans are harmful to themselves so the only way to help them is to control them?
Don’t get me wrong, I’m ecstatic to hear that an AI company is actually prioritizing safety. I just have my doubts. But maybe there are 30 billion reasons I’m wrong.
Oh well. At least this AI-generated “robot wearing a safety vest” image makes me feel safer 🫣