Summary

Geoffrey Hinton, the “Godfather of AI,” warns of a 10-20% chance that AI could cause human extinction within 30 years, citing the rapid pace of development and the likelihood of creating systems more intelligent than humans.

Hinton emphasized that such AI could evade human control, likening humans to toddlers compared to advanced AI.

He called for urgent government regulation, arguing that corporate profit motives alone cannot ensure safety.

This stance contrasts with fellow AI expert Yann LeCun, who believes AI could save humanity rather than threaten it.

  • prime_number_314159@lemmy.world · ↑3 · 21 hours ago

    An AI that is actually more intelligent than humans is probably not a huge threat; many of the mutual-cooperation dynamics that make humans work semi-well together would apply to an AI too. Likewise, an LLM is unlikely to cause any problems just by existing.

    Instead, I think the big danger is something like an LLM that convinces people that it’s smarter than they are (probably by being able to recite more facts than they can, or by offering copy/paste explanations of advanced topics), and is then put into more and more places of trust.

    Once it’s there, we have the open possibility that something “weird” happens and many, many devices, controls, etc. simultaneously react poorly to novel inputs. Depending on the type of systems and how widespread the issue is, that could cause extremely large problems. Military systems might be the worst possibility here.

  • WeirdGoesPro@lemmy.dbzer0.com · ↑2 · 1 day ago

    I’m confused: is AI a dumb parrot that is good at spitting out convincing bullshit, or is it a sentient genius that will destroy us all? Every article and comment about it is one or the other, and it can’t be both.

    • Saledovil@sh.itjust.works · ↑1 · 1 day ago

      LLMs are the former. And we’re probably at least one breakthrough away from building something that can actually think. It doesn’t help that we don’t know what thinking actually is.

  • Tyfud@lemmy.world · ↑9 · 1 day ago

    AI will wipe out humanity, but not directly. It will do it by massively accelerating climate change, draining the potable water supply, and so on.

    All for our hubris.

  • floofloof@lemmy.ca · ↑13 · 2 days ago

    Hinton emphasized that such AI could evade human control, likening humans to toddlers compared to advanced AI. … This stance contrasts with fellow AI expert Yann LeCun, who believes AI could save humanity rather than threaten it.

    Contrary to what the reporting suggests, these views don’t actually contradict each other. Yann LeCun says AI could save humanity, while Geoffrey Hinton says that, in the absence of strong government regulation, AI companies will not develop it safely. Both of these things can be true.

  • Flying Squid@lemmy.world · ↑27 ↓2 · 2 days ago

    I am so much less worried about being wiped out by artificial intelligence than by the kind that evolved biologically.

  • MrNesser@lemmy.world · ↑2 · 1 day ago

    There are a few ways AI could go:

    1. It’s completely indifferent to us, ignores us, and does its own thing.
    2. Someone takes a shot at it and we end up in a war à la The Matrix.
    3. AI sees us as needing it and takes over, like it does in the Polity novels.

    Realistically, 1 is the best case, since 3 would cause civil wars as governments lose power.

    The only problem with 1 is that it inevitably leads to 2, because we are ignorant idiots.

  • mercphilby@discuss.online · ↑3 ↓2 · 1 day ago

    I’m in favor of human extinction. It’s not personal; it’s just better for the universe if we stop existing.

    • HubertManne@moist.catsweat.com · ↑2 ↓1 · 2 days ago

      This is how I feel. Welp, there is the definite extinction we are accelerating toward, and then the possibility of an existential extinction from other sources.

  • Taleya@aussie.zone · ↑34 · 2 days ago

    AI will not cause human extinction.

    Humans will cause human extinction by being complete dumbfucks. AI may simply be the tool we use.

  • dan1101@lemm.ee · ↑1 ↓2 · edited · 1 day ago

    Unless somebody makes tens of thousands of super-soldier bodies that require no recharging and are powered by AI CPUs, I don’t think we have a lot to worry about.

    I don’t think AI will either destroy or save humanity; it’s just a tool.

    Sure, accidents may be caused by connecting AI to dangerous things like power plants, but we can still pull the plug.