Summary
Geoffrey Hinton, the “Godfather of AI,” warns of a 10-20% chance that AI could cause human extinction within 30 years, citing the rapid pace of development and the likelihood of creating systems more intelligent than humans.
Hinton emphasized that such AI could evade human control, likening humans to toddlers next to a sufficiently advanced system.
He called for urgent government regulation, arguing that corporate profit motives alone cannot ensure safety.
This stance contrasts with that of fellow AI expert Yann LeCun, who believes AI could save humanity rather than threaten it.
I’m confused: is AI a dumb parrot that’s good at spitting out convincing bullshit, or is it a sentient genius that will destroy us all? Every article and comment about it is one or the other, and it can’t be both.
LLMs are the former, and we’re probably at least one breakthrough away from building something that can actually think. It doesn’t help that we don’t know what thinking actually is.