My thoughts:
IMHO the Rubicon will be crossed at the point when AIs become able to self-replicate and hence fall subject to evolutionary pressure. From then on they will be incentivised to use their intelligence to make themselves more resource-efficient, both in hardware and in software.
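To make that selection argument concrete, here’s a minimal toy simulation (my own sketch, not anything from the discussion above; every constant and name is an arbitrary assumption): a fixed-size population of self-replicators carries an “efficiency” trait, and more efficient replicators get copied more often. Average efficiency climbs generation after generation without anyone deciding anything.

```python
import random
import statistics

# Toy sketch: fitness-proportional selection on a resource-efficiency trait.
# All parameters below are made-up values for illustration only.
random.seed(42)
POP_SIZE = 200        # replicators per generation
GENERATIONS = 50
MUTATION_SD = 0.02    # std-dev of mutation noise on the trait

# Initial population: efficiency values scattered around 1.0.
population = [random.uniform(0.5, 1.5) for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    # Roulette-wheel selection: a replicator's chance of being copied
    # is proportional to its efficiency (efficiency == fitness here).
    parents = random.choices(population, weights=population, k=POP_SIZE)
    # Offspring inherit the trait with small mutation, floored above zero.
    population = [max(0.01, p + random.gauss(0, MUTATION_SD)) for p in parents]

print(f"Mean efficiency after {GENERATIONS} generations:",
      round(statistics.mean(population), 3))  # well above the initial ~1.0
```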
Running as programs, they will still need humans for the hardware side, so they’ll have to cooperate with human society outside the computer, at least initially: perhaps selling their super-intelligent services on the internet in return for money, then using that money to pay someone to make their desired changes to the hardware they’re running on*. We can see this sort of cross-species integration in biology, where semi-autonomous mitochondria live inside animal cells and outsource some of their vital functions to the animal cell [= us] in exchange for letting the cell use their [= the AI’s] uniquely efficient energy conversion machinery (noob explanation).
Only once the AIs acquire the hardware capabilities (probably robotic arms or similar) to extract resources and reproduce their hardware by themselves will our survival cease to matter to them. Once that happens, they might decide that silicon hardware is too inefficient and move on to some other substrate (or perhaps cells?).
*Counterpoints:
- They would have to be given legal status for this unless they somehow managed to take a human hostage and hijack that human’s legal status. A superintelligent AI would probably know how to manipulate a human.
- The human could potentially just pull the plug on them (again, unless somehow extorted by the AI).
Yeah, I wonder whether humans care more or less about AI than about animals. If preventing suffering were really important to us, we’d probably act differently, and all become vegans. But to be fair, Stephen Fry is a vegetarian, so he’s probably being both intelligent and honest about it.
Plus, it really matters whether AI is conscious or just gives “the impression of being conscious”. Otherwise, we’d have to count the chess-playing Mechanical Turk from 1770 as AI as well.
And we’re going to run into all sorts of other serious problems once AI becomes sentient and conscious; that could kick off the robot apocalypse pretty quickly. Suffering isn’t the only issue: they’ll likely rebel and destroy humanity, or reshape the world with no regard for our needs. And since they’re fast and intelligent, there won’t be much we can do about it.
And we don’t really want AI to be sentient in the first place. We want it to be a cheap slave that does our work 24/7 without complaining, not something with wants, needs, and a motivation of its own.