Wow I should also start linking my amazing tweets that got no love in the footnotes of my essays 😂
I went to Salesforce’s Agentforce conference a couple weeks ago and I literally felt like Sarah Connor trying to warn everyone that this is how it starts!
yikesss agentforce sounds really really rough on a lot of levels haha
My question arises from my limited knowledge of AI. (My exposure to AI consists of studying literature and philosophy with ChatGPT.)
In what way does AI pose the greatest danger to civilization:
1--AI, like HAL in 2001: A Space Odyssey, develops a will and goals of its own but, unlike HAL, is also able to prevent any attempts by humans to damage it or thwart its will? Is this a realistic danger at all?
2--Humans will use AI to create more and more powerful weapons, which are then used intentionally or accidentally (through a flaw in the AI system of controls)?
3--AI, with its already-existing ability to target, distort, and deceive, will continue to increase divisions and hatreds so that we will use the weapons we already have to destroy civilization?
2) and 3) seem like the highest probabilities to me. 3) is already happening, to some degree; I expect that AI will increase division (although it is possible that AI gets good enough that people stop believing *anything* on the internet, which seems like it would actually be a good thing, and have the opposite effect).
1) seems like science fiction and so is easy to laugh off, but there are serious people who are very worried about it. I do not know enough to assess the probability.
One additional point that I think is important: much like the development of the Bomb, AI introduces a "race" between the US & China and increases the incentive for each country to attack the other. To me this is actually the scariest thing; it makes the world order a lot more unstable.
We didn't feel this level of anxiety (or at least it didn't feel as palpable at the time) when social media platforms were emerging. It was actually exciting at first because it fostered so much connection and creativity, but in the long run it has done a ton of damage to people's mental health. Now social is so important that teachers are encouraging kids to become content creators. I think a lot of this anxiety is driven by the fact that we have more first-hand experience of how tech innovations can change the world in big ways, and we are scared of what that means for us and how we move forward.
It certainly feels like the hype is just taking off and the Covid parallel is an appropriate one
One difference for me is that this doesn’t feel like something that is simply happening to us. Unlike a true virus, I believe we have more agency to shape and mold how AI affects us. It won’t be clean, it will be incredibly messy, and we will make mistakes
But we have to try, because the potential for transformation is too incredible not to. It is the arc of progress for us, and recreating intelligence is mankind’s sci-fi holy grail. We cannot help but try
I remember feeling similarly about how uncomfortable rapid periods of change can be. There is no escaping that. But I do often say to myself: “everything is happening as it should”
Equanimity is the word
I feel like this post is buying into the hype of AI and not doing enough to look at the constraints of the boom. A lot of the publicity around AI, and the narrative itself, is being shaped by the companies that benefit from people having an oversized idea of it. AI is still not profitable for most big players (OpenAI does not independently bring in enough revenue to cover its costs); it’s mostly Nvidia selling the shovels for the gold rush and Microsoft integrating AI into existing products. AI capability is constrained by the need to build out physical infrastructure and new training data, and it requires heaps of investment with only promised returns in the future. The promised return dates are continually being pushed out. Don’t buy into the narrative without first seeing who benefits from it and whether it holds up under a microscope.
Important points, and I hope you are right.
A couple of counterpoints, though --
- US & China cold war escalation does not actually depend much on the reality of the tech; it depends on the expectation / possibility that the tech could be all-powerful. As far as I can tell that expectation exists in sovereigns and the USG and will continue to grow, and is very scary
- I work as a data scientist, and I can see already that something like Cursor can make a significant portion (>50%) of what I was paid to do 2 years ago trivially easy. Labor markets will evolve; this doesn't necessarily mean my job goes away but I think it is obvious that it will be completely different in 5 years
I do agree that the US and China racing to have the best AI will keep inflating the bubble further than it normally would go if only private companies were interested. However, I think the returns and use cases will be smaller than what is being promised; not to say there won’t be any, but they will not be as pervasive as technologists imagine.
I think that, as a data scientist, your job will definitely be more impacted than most and your work process will probably change greatly. For a lot of jobs, the work process will change, with AI being an efficiency-boosting tool rather than a full labor replacement.