I just listened to this interview with Ilya Sutskever, the cofounder of OpenAI who was recently at the center of the company’s corporate drama (he was the decisive board vote originally ousting CEO Sam Altman, before apparently changing his mind and advocating for Altman’s return).
The interview is from the spring, well before the intrigue, and it is well done. There is very little fluff and the interviewer can speak intelligently and ask real questions about AI.
Ilya has a gravitas and seriousness about him that is very compelling. The points I found most interesting were:
He doesn’t think hardware is much of a differentiator for the performance of AI models. It matters for cost but is not a bottleneck to actual progress.
Because of that, he thinks a disruption in Taiwan (where most state-of-the-art chips are manufactured) would not have a catastrophic impact on the development of AI; it would instead be a recoverable setback.
He thinks a significant number of humans will choose to merge with AI, a prospect he finds “tempting.”
As models get better, you need “alignment” (the degree to which the model’s goals are the same as humanity’s) to improve faster than performance improves. This will be a significant challenge if and when models become smarter than humans or can hide their motives.
The interviewer asks him how AI will start to influence “atoms, not just bits,” i.e. how it will start to impact the physical world. He implies that this will happen (at least at first) through AI acting via humans: for example, AI could encourage someone to rearrange their apartment for a more efficient setup, with the physical action carried out by the human.
He says he has a very wide range in his prediction for how much of GDP will be AI-driven in 2030, but feels confident it will not be “very small.” If it is very small, he expects that would be because the reliability of the models is not where it needs to be for them to be economically useful.
It is good to hear directly from one of the key figures in the current AI wave. Hearing him speak also made it hard for me to imagine him acting impulsively in the board’s vote to oust Sam Altman, so I am very curious to learn more about his motivations there, and to understand what made him change his mind.