I went to Salesforce’s Agentforce conference a couple weeks ago and I literally felt like Sarah Connor trying to warn everyone that this is how it starts!
yikesss agentforce sounds really really rough on a lot of levels haha
Wow I should also start linking my amazing tweets that got no love in the footnotes of my essays 😂
We didn't feel this level of anxiety (or at least it didn't feel as palpable at the time) when social media platforms were emerging. It was actually exciting at first because it fostered so much connection and creativity, but in the long run it has done a ton of damage to people's mental health. Now social is so important that teachers are encouraging kids to become content creators. I think a lot of this anxiety is driven by the fact that we have more first-hand experience of how tech innovations can change the world in big ways, and we're scared of what that means for us and how we move forward.
I feel like this post is buying heavily into the hype of AI and not doing enough to look at the constraints of the boom. A lot of the publicity around AI, and the narrative itself, is being shaped by the companies that benefit from people having an oversized idea of it. AI is still not profitable for most of the big players (OpenAI does not independently bring in enough revenue to cover its costs); it's mostly Nvidia selling shovels for the gold rush and Microsoft integrating AI into existing products. AI capability is constrained by the need to build out physical infrastructure and gather new training data, and it requires heaps of investment with only a promised return in the future. The promised return dates keep getting pushed out. Don't buy into the narrative without first seeing who benefits from it and whether it holds up under a microscope.
Important points, and I hope you are right.
A couple of counterpoints, though --
- US & China cold war escalation does not actually depend much on the reality of the tech; it depends on the expectation / possibility that the tech could be all-powerful. As far as I can tell that expectation exists in sovereigns and the USG and will continue to grow, and is very scary
- I work as a data scientist, and I can already see that something like Cursor makes a significant portion (>50%) of what I was paid to do 2 years ago trivially easy. Labor markets will evolve; this doesn't necessarily mean my job goes away, but I think it is obvious that it will be completely different in 5 years
I do agree that the US and China racing to have the best AI will keep inflating the bubble further than it would go if only private companies were interested. However, I think the returns and use cases will be smaller than what is being promised. That's not to say there won't be any, but I don't think they will be as pervasive as technologists imagine.
I think that as a data scientist, your job will definitely be more impacted than most, and your work process will probably change greatly. For a lot of jobs, the work process will change, with AI being an efficiency-boosting tool rather than a full labor replacement.
My question arises from my limited knowledge of AI. (My exposure to AI is limited to studying literature and philosophy with ChatGPT.)
In which of these ways does AI pose the greatest danger to civilization:
1--AI, like HAL in 2001: A Space Odyssey, develops a will and goals of its own but, unlike HAL, is also able to prevent any attempts by humans to damage it or thwart its will? Is this a realistic danger at all?
2--Humans will use AI to create more and more powerful weapons, which are then used intentionally or get used accidentally (through a flaw in the AI system of controls)?
3--AI, with its already-existing ability to target, distort, and deceive, will continue to increase divisions and hatreds, so that we use the weapons we already have to destroy civilization?
I think 2) and 3) seem like the highest probabilities to me. 3) is already happening, to some degree; I expect that AI will increase division (although it is possible that AI gets good enough that people stop believing *anything* on the internet, which seems like it would actually be a good thing and have the opposite effect).
1) seems like science fiction and so is easy to laugh off, but there are serious people who are very worried about it. I do not know enough to judge the probability.
One additional point that I think is important: much like the development of the Bomb, AI introduces a "race" between the US & China and increases the incentive for each country to attack the other. To me, this is actually the scariest thing; it makes the world order a lot more unstable.
Today I tried to use Copilot to fill out a very simple Excel sheet where I had to change the value of one column on 20 rows. It was like having a 5-year-old tie my shoe. I tried to explain what I wanted it to do about 10 different ways, with simpler instructions each time. Eventually I got a knot, said good enough, and did the rest myself. Filling out spreadsheets is not my job; it's just something I have to do in order to do my job. I would be glad to have AI do this work for me so I can focus on my job. Maybe someday it will, but not today.
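For reference, the edit itself is only a few lines of script. Here's a minimal sketch of that kind of change, assuming a hypothetical workbook name, sheet, and column letter:

```python
# Rough sketch only: set a new value in one column across 20 data rows.
# "report.xlsx", column "C", and the value are made-up placeholders.
from openpyxl import load_workbook

wb = load_workbook("report.xlsx")    # open the existing workbook
ws = wb.active                       # use the first/active sheet

for row in range(2, 22):             # rows 2-21: 20 rows under a header row
    ws[f"C{row}"] = "updated value"  # overwrite the target column's cell

wb.save("report.xlsx")               # write the changes back to the file
```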
It certainly feels like the hype is just taking off and the Covid parallel is an appropriate one
One difference for me is that this doesn't feel like something that is simply happening to us. Unlike with a true virus, I believe we have more agency to shape and mold how AI affects us. It won't be clean, it will be incredibly messy, and we will make mistakes
But we have to try, because the potential for transformation is too incredible not to. It is the arc of progress for us, and recreating intelligence is mankind's sci-fi holy grail. We cannot help but try
I remember feeling similarly about how uncomfortable rapid periods of change will be. There is no escaping that. But I do often say to myself: “everything is happening as it should”
Equanimity is the word
I hope your reaction is closer to accurate than mine. I've been pretty blown away by how useful it already is at my job (data analysis) and in software engineering. It's made both of those much more accessible to a lot of people. The outcome may be that it creates lots of new jobs and value, or that it erodes the earning power of people in those jobs; I'm not sure.
I think the geopolitical risk is very real too, and it is independent of the actual reality of the tech. If the Chinese and US governments believe superintelligence is akin to the bomb, they will act accordingly