• danhab99@programming.dev
    11 hours ago

    Okay, I don’t want to directly disagree with you; I just want to add a thought experiment:

    If it is a fundamental truth of the universe that a human literally cannot program a computer to be smarter than a human (because of some Neil deGrasse Tyson-esque interpretation of entropy), then no matter what, AIs will crash cars as often as real people do.

    And the answer to the question of who is responsible for an AI’s actions will always be a person, because people can take responsibility and AIs are just machine-tools. This basically means there is a ceiling on how autonomous self-driving cars can ever be (because someone will have to sit at the controls, ready to take over), and I think that is a good thing.

    Honestly, I’m in the camp that computers can never truly be “smarter” than a person in all respects. Maybe you can max out an AI’s self-driving stats, but then you’ll have no points left over for morality; or you can balance the two, and it might just get into less morally challenging accidents more often ¯\_(ツ)_/¯. There are lots of ways to look at this.

    • mojofrododojo@lemmy.world
      7 hours ago

      a human can literally not program a computer to be smarter than a human

      I’d add that a computer vision system can’t integrate new information as quickly as a human, especially when limited to vision-only sensing - which Tesla is strangely obsessed with, even as the cost of additional sensors keeps dropping and their utility has been proven by Waymo’s excellent record.

      All in all, I see no reason to attempt to replace humans when we have billions of them. This is doubly so for ‘artistic’ AI purposes - we have billions of people, so let artists create the art.

      Show me an AI-driven system that can clean my kitchen or do my laundry. That’d be WORTH it.