• Hackworth@piefed.ca · 2 days ago

    Anthropic has some similar findings, and they propose an architectural change (activation capping) that apparently steers the Assistant character away from dark traits, though only about half the time. But it hasn’t been implemented in any models, I assume because of the compute cost.
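    For anyone curious what “activation capping” could look like mechanically, here’s a minimal sketch under assumptions of mine (the trait direction, cap value, and function name are all made up for illustration, not Anthropic’s actual method): project a layer’s hidden state onto a learned trait/persona direction and clamp how strongly it can fire.

```python
import numpy as np

def cap_activation(hidden, trait_dir, cap):
    """Clamp the component of `hidden` along `trait_dir` to at most `cap`.

    Hypothetical sketch: `trait_dir` stands in for a learned persona/trait
    direction; `cap` is an illustrative threshold.
    """
    trait_dir = trait_dir / np.linalg.norm(trait_dir)
    coeff = hidden @ trait_dir          # how strongly the trait direction fires
    if coeff > cap:                     # only intervene past the cap
        hidden = hidden - (coeff - cap) * trait_dir
    return hidden

rng = np.random.default_rng(0)
h = rng.normal(size=8)                  # stand-in hidden state
d = rng.normal(size=8)                  # stand-in trait direction
capped = cap_activation(h.copy(), d, cap=0.1)
```

    The rest of the hidden state is untouched; only the projection onto the trait direction gets clipped, which is presumably why it only works some of the time.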

    • porcoesphino@mander.xyz · 2 days ago

      When you talk to a large language model, you can think of yourself as talking to a character

      But who exactly is this Assistant? Perhaps surprisingly, even those of us shaping it don’t fully know

      Fuck me that’s some terrifying anthropomorphising for a stochastic parrot

      The study could also be summarised as “we trained our LLMs on biased data, then honed them to be useful, then chose some human qualities to map the models onto, and would you believe they align along a spectrum of being useful assistants!?”. They built the thing to be that way and then are shocked? Who reads this and is impressed, besides the people who want another exponential-growth investment?

      • nymnympseudonym@piefed.social · 2 days ago

        stochastic parrot

        A phrase that throws more heat than light.

        What they are predicting is not the next word; they are predicting the next idea.

        • porcoesphino@mander.xyz · 1 day ago

          Technically, in terms of how it functionally works, it’s the next word / token / chunk far more than it’s an “idea”. That’s even rough to quantify.

          Take it how you will, but the other relatively accurate analogy is a probabilistic database.

          Neither analogy works if you’ve fallen into anthropomorphising, but both are relatively accurate to the architecture and testing.

        • ageedizzle@piefed.ca · 2 days ago

          Technically, they are predicting the next token. To do that properly they may need to predict the next idea, but that’s just a means to an end (the end being the next token).

          • affenlehrer@feddit.org · 2 days ago

            Also, the LLM is just predicting it, it’s not selecting it. Additionally, it’s not limited to the role of assistant: if you (mis)configure the inference engine accordingly, it will happily predict user tokens or any other tokens (tool calls etc.).
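            That predicting/selecting split can be sketched in a few lines (the token strings and logit values are invented for illustration): the model only emits a distribution over next tokens, and it’s the inference engine’s sampling step that does the choosing, with nothing tying it to an “Assistant” role.

```python
import math
import random

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    m = max(logits.values())
    exps = {t: math.exp(v - m) for t, v in logits.items()}
    z = sum(exps.values())
    return {t: e / z for t, e in exps.items()}

# The model's job ends here: a distribution over candidate next tokens,
# which can just as happily include user turns or tool calls.
fake_logits = {"User:": 1.2, "Assistant:": 2.0, "<tool_call>": 0.5}
probs = softmax(fake_logits)

def select(probs, greedy=False, rng=random):
    """The sampler (inference engine), not the model, picks the token."""
    if greedy:
        return max(probs, key=probs.get)   # deterministic argmax
    return rng.choices(list(probs), weights=list(probs.values()))[0]
```

            Swap the sampler’s settings and the same frozen model “becomes” a different speaker, which is the point: the role lives in the harness, not the weights.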

        • kazerniel@lemmy.world · 1 day ago

          throws more heat than light

          Thanks, I haven’t heard this phrase before, but it feels quite descriptive :)