• CharlesDarwin@lemmy.world · 1 day ago

      Get ready for the conservative jagoffs to start painting themselves as the exclusive victims, much like they still do when it comes to social media platforms - "mah kawtehnt iz gettin shadoah banned, ya’ll… " and something something argle bargle Hunter Biden’s “laptop”. And something something argle bargle ermagawd, I wasn’t able to say just whatever crazy shit I wanted on a private platform when it came to Covid conspiracy theories! How is this even America?

    • BadmanDan@lemmy.world · 1 day ago

      “In July, the Trump administration signed an executive order barring “woke” AI from federal contracts, demanding that government-procured AI systems demonstrate “ideological neutrality” and “truth seeking.” With the federal government as tech’s biggest buyer, AI companies now face pressure to prove their models are politically “neutral.”

      ^^^ This is where Trump face-plants. Unless OpenAI straight up programs ChatGPT to lie or to avoid answering anything that could be seen as negative for the right, none of this pressure from Trump is gonna matter. The AI is still going to seek out the most accurate answers, which almost always leads to a pro-liberal position.

      Even if they program it to avoid answering potentially damning questions about the right, users can just keep pressuring it, and it eventually folds.

        • Sandbar_Trekker@lemmy.today · 22 hours ago

          Close, but not always. It will give an answer based on the data it’s been trained on, and there’s also a bit of randomization controlled by a “seed”.

          So, in general it will give the most average answer, but that seed can occasionally steer it down the path of a less common answer.
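
          Rough sketch of what that looks like (made-up answer probabilities, plain Python, nothing model-specific): sampling is weighted toward the most likely answer, but the seed decides which draw you actually get.

          ```python
          import random

          # Hypothetical distribution over possible answers (made-up numbers):
          # the "average" answer dominates, but rarer ones still carry probability mass.
          candidates = ["common answer", "less common answer", "rare answer"]
          weights = [0.80, 0.15, 0.05]

          def sample_answer(seed: int) -> str:
              # The seed fixes the pseudo-random draw, so the same seed always
              # reproduces the same pick; a different seed can land on a rarer answer.
              rng = random.Random(seed)
              return rng.choices(candidates, weights=weights, k=1)[0]

          for seed in range(8):
              print(seed, "->", sample_answer(seed))
          ```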

  • Lasherz@lemmy.world · 1 day ago

    Isn’t it kinda hard for LLMs to be programmed not to just point people towards leftism in that case? Right-wing ideology isn’t based in established truths; it’s based on whatever the individual thinks at a given time, subject to radical change. Leftism has the consistency that science and academia offer, which makes it the obvious citation for most political facts if the model is going to choose an angle. The exception is perhaps anticommunism, since it’s well established in history books as the justification for the Korean and Vietnam wars.

    • daannii@lemmy.world · 19 hours ago

      Grok is pro-Nazi now. It also says more inaccurate things. Supposedly it’s modeled on Musk’s tweets. So it’s not too hard.

  • BadmanDan@lemmy.world · 1 day ago

    That’s good, but it’ll still be the same type of answers. The left/liberals always win these ChatGPT debates, simply because the AI tries to give you the most accurate FACTS and data it can find, which almost always leads to, at the very least, a pro-liberal answer. If anything, ChatGPT favors liberal and pragmatic ideals over anything else.

    This would hurt conservative ChatGPT users, unless OpenAI straight up programs it to lie.