OpenAI co-founder Ilya Sutskever spoke on a range of topics at NeurIPS, the annual AI conference, Friday afternoon before accepting an award for his contributions to the field.
Sutskever gave his predictions for “superintelligent AI” — AI more capable than humans at many tasks — which he believes will be achieved at some point. Superintelligent AI will be “different, qualitatively” from the AI we have today, Sutskever said — and in some respects unrecognizable.
“[Superintelligent] systems are actually going to be agentic in a real way,” Sutskever said, as opposed to the current crop of “very slightly agentic” AI. They’ll “reason” and, as a result, become more unpredictable. They’ll understand things from limited data. And they’ll be self-aware, Sutskever believes.
They may want rights, in fact. “It’s not a bad end result if you have AIs and all they want is to co-exist with us and just to have rights,” Sutskever said.
After leaving OpenAI, Sutskever co-founded Safe Superintelligence (SSI), a lab focused on general AI safety. SSI raised $1 billion in September.