I have been thinking a lot about this set of articles and podcast interview from Leopold Aschenbrenner. He is a wunderkind AI researcher who was fired from OpenAI’s “superalignment” team in what seems like part of the company’s move away from caution about AI and towards more aggressive commercialization of the technology.
His thesis, in my words, is that:
AI has been advancing at a rate of ~3x per year: this year’s models are roughly 3x “smarter” than last year’s.
This progress has been consistent enough that it looks something like a “law”, akin to the famous Moore’s law, which holds that the number of transistors on an integrated circuit tends to double every two years. (Note that these “laws” are empirical trends driven by market forces, not actual rules, and they do not have to hold forever.)
This implies exponential growth in AI’s power that is only just now heating up. He likens the present moment to February 2020, when a small group of people recognized the exponential spread of COVID but the world at large had not yet internalized it.
Because of this trend, AI will quite soon be as smart as the smartest humans, and soon after that it will be much smarter (“superintelligence”). His timeline is the next ~10 years (exponential growth is crazy; there’s a quick back-of-the-envelope sketch after this summary).
This will change everything. Governments will by necessity get involved, and it will become a dangerous and unstable arms race between the US and China. His prediction has a lot of parallels to the development of nuclear weapons.
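As an aside, this arithmetic is mine, not Aschenbrenner’s, but it’s worth seeing what sustained compounding actually implies. Here is a minimal sketch, assuming a constant 3x-per-year improvement and comparing it to Moore’s-law-style doubling every two years:

```python
# Back-of-the-envelope compounding, purely illustrative.
# Assumes a constant 3x/year improvement in AI "smartness" (as summarized
# above) vs. Moore's-law-style doubling of transistor counts every 2 years.

def ai_multiplier(years: float, yearly_rate: float = 3.0) -> float:
    """Cumulative improvement after `years` at a fixed yearly multiplier."""
    return yearly_rate ** years

def moore_multiplier(years: float) -> float:
    """Cumulative growth from doubling every two years."""
    return 2 ** (years / 2)

for years in (1, 5, 10):
    print(f"{years} yr: 3x/yr -> {ai_multiplier(years):,.0f}x, "
          f"2x/2yr -> {moore_multiplier(years):,.1f}x")

# 1 yr: 3x/yr -> 3x, 2x/2yr -> 1.4x
# 5 yr: 3x/yr -> 243x, 2x/2yr -> 5.7x
# 10 yr: 3x/yr -> 59,049x, 2x/2yr -> 32.0x
```

Whether “smartness” really compounds like a single multiplier is, of course, the contested part, but the arithmetic shows why a sustained 3x/year trend makes a ten-year superintelligence timeline sound plausible to people who take the trend seriously.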
The interview actually shook me up quite a bit; I had a lingering feeling of dread and confusion for a couple of days after hearing it. This reaction is quite rare for me, even though there are plenty of doomsday predictions out there.
Since then, several friends have expressed their skepticism about his thesis to me:
The amount of energy the world will need to produce to power all this compute is enormous, and the timelines are too aggressive.
AI people have been predicting similar things for decades; this is a trap that smart people fall into.
Aschenbrenner is talking his book; he is starting an investment fund focusing on AI and wants to build a name for himself.
I believe all of these things, more or less. And six days after listening to the interview, I no longer have the same feeling of dread. But I do think the world is changing at an accelerating rate, AI will be a big part of that, and the relative geopolitical stability that a lot of Americans have enjoyed in my lifetime is not a guarantee. And the “adults in the room” do not seem very well equipped to handle problems of this magnitude, whether they are Sam Altman or Joe Biden or Donald Trump.
So, right now I think this is pretty good advice and I’m trying to follow it. It’s also a pretty good way to live even if you believe the world will not change at all.
Not a good time for employees. But great times for businessmen, I think.
Nice write-up. I share the same sentiment: the pace of AI development scares me, yet at the same time it seems over-hyped.
One thing that comes to mind is the contrast between Moore's Law and the 3x rate of improvement in AI on the one hand, and the rate at which the business world can figure out how to use AI in a meaningful and productive way on the other. That effective business-utilization rate is certainly not progressing at 3x; it seems to be crawling along slowly.
Another thing I think could advance the timeline for AI is a major military conflict. Similar to how the US military basically invented the Internet, we could see a new framework or mode of adoption for AI that the private sector would never have developed on its own. With practically unlimited resources and the power of centralization, we could see a major country's military establish the groundwork (for military purposes) for maximizing AI's potential. Then, years later, the private sector could adopt this framework and start to utilize AI to the extent that many AI influencers envision today.
Probably goes without saying, but I'd prefer the slower AI timeline vs. the one involving a global military conflict!