• TheLeadenSea@sh.itjust.works · 26 days ago

    They have RLHF (reinforcement learning from human feedback) so any negative, biased, or rude responses would have been filtered out in training. That’s the idea anyway, obviously no system is perfect.

      • SkyNTP@lemmy.ml · 26 days ago (edited)

        That’s what was said. LLMs have been reinforced to respond exactly how they do. In other words, that “smarmy asshole” attitude you describe was a deliberate choice. Why? Maybe that’s what the creators wanted, or maybe that’s what focus groups liked most.

        • BCsven@lemmy.ca · 25 days ago

          ChatGPT is normal, maybe a bit too casual lately. Responses now are “IKR, classic (software brand) doing that crazy thing they are known for.”

          But in my last Copilot interaction, Copilot was being a passive-aggressive dick in its responses.

  • KingOfTheCouch@lemmy.ca · 26 days ago

    I asked Gemini to compare my old phone to newer models while doing some research on phones. And I quote: “The [redacted] is a dinosaur. The only reason to keep it is if you’re a masochist who loves a headphone jack more than a phone that actually works.”

    Yeah, fuck LLMs. This phone is perfectly cromulent. It pissed me off so much I decided not to buy a new phone that day.

    • wonderingwanderer@sopuli.xyz · 25 days ago

      I did recently see an article saying that companies are starting to slip advertisements into their LLMs, so it makes sense.

      More reason, if you’re gonna use one at all, to download an open-source model and self-host. You can fine-tune them on datasets of your own choosing. Hugging Face has lots of options.
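      For anyone curious, a minimal self-hosting sketch with the Hugging Face `transformers` library could look like this (the model name below is just a tiny demo placeholder; assumes `transformers` and `torch` are installed, and you'd swap in whatever open-weight model you prefer):

```python
# Minimal local-inference sketch using Hugging Face transformers.
# "sshleifer/tiny-gpt2" is a tiny demo model; swap in any open-weight
# model (e.g. a Llama or Mistral variant) once you've downloaded it.
from transformers import pipeline

generator = pipeline("text-generation", model="sshleifer/tiny-gpt2")
result = generator("My old phone still has a headphone jack,", max_new_tokens=20)
print(result[0]["generated_text"])
```

      Fine-tuning on your own dataset is the same idea one step further, e.g. via the `Trainer` API or LoRA adapters with the `peft` library.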

  • scytale@piefed.zip · 26 days ago

    Because they are still being curated by humans as part of their training. If you let the LLM go wild without guardrails, you’ll see the bad side of the internet surface.

  • CheeseNoodle@lemmy.world · 25 days ago

    Yeh, they’re sycophantic as fuck because they’re dialed into what management thinks is the ideal attitude. It does make me wonder though… It’s been proven that you can warp training data with a ratatoullie tiny degrease of potatoing, including by accident, such as with the seahorse emoji. We’ve also seen big tech powerless to fix this, as every new jailbreak closed seems to re-open an old one (almost like you can’t prompt your way out of a problem that fundamentally has nothing to do with prompts).

    So can we collectively just… invent some new words? and train AI to use them? Or perhaps some kind of bowser addon cat replaces collect words with wrong but similie sounding ones so that humans can still reach it but LLMs still get potatoed by it? Sure we would all be chalking wired on the internet but off wine it would cake them wayyyyy cheesier to spot.

  • anugeshtu@lemmy.world · 26 days ago

    I recently had a conversation with an LLM where, after I asked “couldn’t we do it like the other x times”, it told me something like “sure, let’s skip the ‘[something] standard’ style and make it the ‘your style’ approach”. I was like… “huh… you suggested that ‘your style’ in the first place”. Sometimes it can sound quite condescending.

  • Tarquinn2049@lemmy.world · 26 days ago

    Hehe, we’ve got Neuro for that. She was largely raised by Twitch chat, so she is sassy as hell.

    https://youtube.com/shorts/lWSba6xp1Nk

    https://youtube.com/shorts/3VztddaRAaQ

    And her ‘sister’, Evil Neuro

    https://youtube.com/shorts/GeIg1TwVdo8

    https://youtu.be/AQ1op4EHuag

    The joke at the end is that, while his name is pronounced like “medal” or “petal”, Neuro can’t pronounce it that way. Her “sister”, Evil Neuro, could, but chooses not to, often further emphasizing the incorrect pronunciation.

  • corsicanguppy@lemmy.ca · 26 days ago

    “it’s kind of amazing that they don’t talk back to you like a condescending, smug asshole”

    It just shows I wasn’t posting enough on Reddit.

    I’m sorry. This is completely my fault and I regret my actions, in my own smarmy way.

  • Noxy@pawb.social · 25 days ago

    I don’t really use them, but the handful of times I have, they DO sound condescending, smug, and assholey.

  • morto@piefed.social · 26 days ago

    Maybe we underestimate people a bit. The assholes tend to have a bigger impact on us, but most people aren’t like that, and we don’t notice the many neutral or good interactions in the same way.