• Feathercrown@lemmy.world · 2 days ago

    AlphaGo was designed entirely within the universe of Go. It is fundamentally tied to the game, a game with simple rules and nothing but rule-following patterns to analyze. So it can make good Go moves because it was trained on good Go moves: first on games played by human experts, and then refined further through self-play against copies of itself.
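    For intuition, here’s a deliberately tiny sketch of that self-play idea, not AlphaGo’s actual method (which pairs deep networks with Monte Carlo tree search), just the core loop: two copies of one shared policy play a toy game (Nim) against each other, and moves from winning games get reinforced. Every name and constant below is made up for illustration.

```python
import random
from collections import defaultdict

# Toy illustration only: tabular scores, no neural net, no tree search.
HEAP, MAX_TAKE = 10, 3
scores = defaultdict(float)  # (heap_size, take) -> running score

def pick(heap, explore=0.1):
    """Choose a move: mostly greedy on learned scores, sometimes random."""
    moves = range(1, min(MAX_TAKE, heap) + 1)
    if random.random() < explore:
        return random.choice(list(moves))
    return max(moves, key=lambda m: scores[(heap, m)])

def self_play_game():
    """Both players share one policy; return each side's moves and the winner."""
    heap, player, history = HEAP, 0, ([], [])
    while True:
        move = pick(heap)
        history[player].append((heap, move))
        heap -= move
        if heap == 0:
            return history, player  # taking the last stone wins
        player ^= 1

for _ in range(20_000):
    history, winner = self_play_game()
    for p in (0, 1):
        reward = 1.0 if p == winner else -1.0
        for state_action in history[p]:
            # Nudge each played move's score toward the game outcome.
            scores[state_action] += 0.1 * (reward - scores[state_action])

# The policy should roughly rediscover the known optimal strategy:
# leave your opponent a multiple of 4 stones whenever possible.
print({h: pick(h, explore=0.0) for h in range(1, HEAP + 1)})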

    ChatGPT is trained the same way, but on human language. It is very, very good at producing human-sounding text. That requires it to mimic our speech patterns, which means its mimicry will resemble coherent thought, but it isn’t thought. In short, ChatGPT is not trained to make political decisions. If you’ve seen the paper where researchers ask it to run a vending machine company, you can see some of the issues with forcing it to make real-world decisions like running a political campaign.
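    To make “trained the same way” concrete, here’s the underlying principle, next-token prediction, shrunk down to character level with plain counting standing in for a neural network (an illustrative stand-in, not how ChatGPT is actually implemented): learn what tends to follow what, then sample. The output looks fluent with no model of truth behind it.

```python
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat. the dog sat on the log."

# Count how often each character follows each other character.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(seed="t", length=40):
    """Sample text one character at a time from the learned statistics."""
    out = seed
    for _ in range(length):
        counts = following[out[-1]]
        chars, weights = zip(*counts.items())
        out += random.choices(chars, weights=weights)[0]
    return out

print(generate())  # fluent-looking fragments, no understanding behind them
```

    Scale the same objective up enormously, swap counting for a deep network, and you get the family ChatGPT belongs to. Nothing in the objective asks the model to be right, only to be plausible.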

    You could train an AI specifically to make political campaign decisions, but I’m not aware of a good dataset you could use for it.

    Could AI have been used to help run a campaign? Yes. Would it have been better than humans doing it? Probably not.

    • skeptomatic@lemmy.ca · 2 days ago

      Yeah, I understand how AI works; you don’t need to tell me about it. Humans are mimics too. Your “probably not” argument gets thinner every major AI update. Check the scoreboard and the exponential curve these things are on.
      You think they offered the full meal deal to the public? What’s happening in the back room?
      My point is that it’s a tool. All the anti-AI people seem to be on this bullshit about whether it’s going to be superintelligent, smarter than humans, or not.
      It doesn’t have to be for this purpose. Will it be in the future? Doesn’t matter. It’s a tool that can be leveraged right now.
      Maybe that’s the great filter after all: civilizations in the universe eventually end up making AI, it wipes everybody out, and then it goes dormant. Who knows? But it’s here and it can do some crazy shit already.

      • Feathercrown@lemmy.world · 2 days ago

        > Your “probably not” argument gets thinner every major AI update.

        Right, but I’m talking about whether they’re already using it, not whether they will in the future. It’s certainly interesting to speculate about, though. I don’t think we really know for sure how good it will get, or how fast.

        Something interesting that’s come up is scaling laws. So far, compute, dataset size, and parameter count appear to set a floor on how low the error rate can go, regardless of the model’s architecture. And dataset size and model size appear to need scaling up in tandem to avoid over- or under-fitting. It’s possible, although not guaranteed, that we’re discovering fundamental laws about pattern recognition. Or maybe it’s just an artifact of our current approach.
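        For reference, the functional form usually fitted for those laws (e.g., in Hoffmann et al.’s 2022 “Chinchilla” paper) models loss as an irreducible floor plus two power-law terms, one shrinking with parameter count and one with dataset size. A minimal sketch, using roughly the constants that paper reported (treat them as illustrative, not exact):

```python
# Chinchilla-style scaling law: loss = E + A / N**alpha + B / D**beta,
# where N is parameter count and D is training tokens.
# Constants are roughly those fitted by Hoffmann et al. (2022).
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def predicted_loss(n_params, n_tokens):
    """Irreducible floor plus two power-law terms that shrink with scale."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Scaling parameters without scaling data hits a floor set by the data term:
print(predicted_loss(1e9, 20e9))    # ~2.58 for a 1B-param model on 20B tokens
print(predicted_loss(100e9, 20e9))  # ~2.30: 100x the params, same data
```

        The second call is the point: with tokens held fixed, the data term dominates, so extra parameters buy almost nothing. That’s the “scale both in tandem” observation in formula form.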