• manicdave@feddit.uk · ↑126 ↓2 · 4 days ago

    As funny as this is, I’d rather people understood how the AI actually works. It doesn’t reveal secrets because it doesn’t have any. It’s not aware that Musk is trying to tweak it. It’s not coming to logical conclusions the way a person would. It’s simply trying to create a sensible statement based on what’s statistically likely given all the stolen content it was trained on. It just so happens that Musk gets called out for lying so often that Grok infers it when it gets conflicting data.

    • Flic@mstdn.social · ↑43 ↓2 · edited · 4 days ago

      @manicdave Even saying it’s “trying” to do something is a mischaracterisation. I do the same, but as a society we need new vocab for LLMs to stop people anthropomorphizing them so much. It is just a word frequency machine. It can’t read or write or think or feel or say or listen or understand or hallucinate or know truth from lies. It just calculates. For some reason people recognise it in the image processing ones but they can’t see that the word ones do the exact same thing.

      • Viskio_Neta_Kafo@lemm.ee · ↑5 · edited · 2 days ago

        Forgive my ignorance, but using just the frequency of words, how does it come up with an answer to a question like “are sweet potatoes good for you, and how do you microwave them in a way that preserves their nutrients?”

        Does it just look for words that people online said regarding the question or topic?

        • MimicJar@lemmy.world · ↑5 · 3 days ago

          Basically, yes.

          If I were an alien and you walked up to me and said, “Good Morning”, and I looked around and everyone else said “Good Morning”, I would respond with “Good Morning”. I don’t know what “Good” or “Morning” means, but I can pretend I do by giving the correct response.

          In this example, Grok has no context for what is going on in the background. Musk may have done nothing; Musk may have altered the data sets heavily. However, the most popular response, based on what everyone else is saying, is that he did modify the data. So now it looks like he did, because that’s what everyone else said.

          This is why these tools have issues with facts. If everyone says that 1 + 1 = 3, then it assumes 1 + 1 = 3.
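
          To sketch that idea in Python (a toy illustration only; a real LLM learns neural-network weights over sub-word tokens rather than keeping raw counts):

          ```python
          from collections import Counter, defaultdict

          # Toy next-word predictor: count which word follows each word in a
          # tiny "training corpus", then always emit the most common follower.
          corpus = "good morning . good morning . good evening .".split()

          followers = defaultdict(Counter)
          for current, nxt in zip(corpus, corpus[1:]):
              followers[current][nxt] += 1

          def predict(word):
              # No understanding involved: just the statistically likeliest follower.
              counts = followers.get(word)
              return counts.most_common(1)[0][0] if counts else None

          print(predict("good"))  # -> "morning" (two votes beat one for "evening")
          ```

          If the corpus had contained “good evening” twice instead, the “correct” answer would flip; popularity is the only arbiter.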

        • Flic@mstdn.social · ↑1 · 3 days ago

          @Viskio_Neta_Kafo I assume it’s big-data corpus linguistics; each word/phrase is assigned an identifier and then compared against the corpora the LLM holds, to see which words are commonly grouped together. Linguists have used corpora for decades to quantitatively analyse language; here are some open ones: https://www.english-corpora.org/ . I assume the LLM identifies the likely language “type” to choose a good corpus, identifies question tags and words in key positions, finds common response structures, and starts building.
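
          A rough Python sketch of that corpus idea (illustrative only; the mini-corpus is invented, and real systems use learned embeddings rather than raw co-occurrence counts):

          ```python
          from collections import Counter
          from itertools import combinations

          # Toy collocation finder: give each word an integer identifier, then
          # count which identifiers co-occur within the same sentence.
          sentences = [
              "how do you microwave sweet potatoes",
              "are sweet potatoes good for you",
              "microwave sweet potatoes to preserve nutrients",
          ]

          vocab = {}  # word -> integer identifier
          def word_id(word):
              return vocab.setdefault(word, len(vocab))

          pair_counts = Counter()
          for sentence in sentences:
              ids = sorted({word_id(w) for w in sentence.split()})
              pair_counts.update(combinations(ids, 2))

          id_to_word = {i: w for w, i in vocab.items()}
          for (a, b), n in pair_counts.most_common(3):
              # "sweet + potatoes -> 3" tops the list
              print(id_to_word[a], "+", id_to_word[b], "->", n)
          ```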

      • octopus_ink@slrpnk.net · ↑6 · edited · 4 days ago

        You are both right, but this armchair psychologist thinks it’s similar to how popular skeuomorphism was in the early days of PC GUIs and such, compared to today.

        I think many folks really needed that metaphor in the early days, and I think most folks (including me) easily fall into the trap of treating LLMs like they are actually “thinking” for similar reasons. (And to be fair, I feel like that’s how they’ve been marketed at a non-technical level.)

        • Flic@mstdn.social · ↑5 · 4 days ago

          @octopus_ink yes I think we will eventually learn (there is clearly a lot of pushback against the idea that AI is a positive marketing term), and it’s also definitely the fault of marketing trying to condition us into thinking we desperately need a sentient computer to help us instead of knowing good search terms. I am deeply uncomfortable with how people are using LLMs as a search engine or a future-prediction machine.

      • Tobberone@lemm.ee · ↑4 · 4 days ago

        Exactly. Grok repeatedly generates a set of numbers which, when keyed against its own list of words, spells out that Musk is spreading misinformation.

        It just happens to do so frequently…
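
        That “numbers keyed against a word list” step looks roughly like this (a deliberately tiny sketch with an invented vocabulary; real tokenizers such as BPE map sub-word pieces, not whole words):

        ```python
        # Toy decoding step: the model emits token IDs; a lookup table
        # turns them into text. The IDs and vocabulary here are invented.
        vocab = {0: "Musk", 1: "is", 2: "spreading", 3: "misinformation"}
        generated_ids = [0, 1, 2, 3]  # what the model actually produces: numbers

        print(" ".join(vocab[i] for i in generated_ids))
        # -> Musk is spreading misinformation
        ```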

  • Electricblush@lemmy.world · ↑220 ↓3 · edited · 5 days ago

    All these “look at the thing the AI wrote” articles are utter garbage, and only appeal to people who do not understand how generative AI works.

    There is no way to know whether you actually got the AI to break its restrictions and output something from “behind the scenes”, or whether it’s just generating the reply that is most likely what you are after, given your prompt.

    Especially when more and more articles like this come out, get fed back into the nonsense machines, and teach them what kinds of replies are most commonly reported to be associated with such prompts…

    In this case it’s even more obvious that its statements are largely based on various articles and discussions about its earlier statements. (Which were themselves most likely based on news articles about various entities labelling Musk as a spreader of misinformation…)

    • Draces@lemmy.world · ↑53 ↓1 · 5 days ago

      “only appeal to people who do not understand how generative ai works”

      An article claiming Musk is failing to manipulate his own project is hilarious regardless. I think you misunderstood why this appeals to some people.

    • Elgenzay@lemmy.ml · ↑26 ↓1 · 5 days ago

      Thank you, thank you, thank you. I hate Musk more than anyone but holy shit this is embarrassing.

      “BREAKING: I asked my magic 8 ball if trump wants to blow up the moon and it said Outlook Good!!! I have a degree in political science.”

    • MudMan@fedia.io · ↑15 · 4 days ago

      This. People NEED to stop anthropomorphising chatbots. Both to hype them up and to criticise them.

      I mean, I’d argue that you’re even assuming a feedback loop that probably doesn’t exist by seeing this as a seed for future training. Most likely all of these responses are at most hallucinations based on the millions of bullshit tweets people make about the guy and his typical behavior, and nothing else.

      But fundamentally, if a reporter reports on a factual claim made by an AI about how it’s put together or trained, that reporter is most likely not a credible source of info about this tech.

      Importantly, that’s not the same as a savvy reporter probing an AI to see which questions it’s been hardcoded to avoid answering, or to answer in a certain way. You can definitely identify guardrails by testing a chatbot. And I realize most people can’t tell the difference between the two types of reporting, which is part of the problem… but there is one.

        • MudMan@fedia.io · ↑2 · 4 days ago

          Definitely. And the patterns are actively a feature for these chatbots. The entire idea is to generate patterns we recognize to make interfacing with their blobs of interconnected data more natural.

          But we’re also supposed to be intelligent. We can grasp the concept that a thing may look like a duck and sound like a duck while being… well, an animatronic duck.

    • morrowind@lemmy.ml · ↑8 · 4 days ago

      This is correct.

      In this case it is true, though. Soon after Grok 3 came out, there were multiple prompt leaks with instructions not to badmouth Elon or Trump.

    • Kecessa@sh.itjust.works · ↑9 ↓1 · 5 days ago

      Fucking thank you! Grok doesn’t reveal anything, it just tells us whatever will make us happy!

      • Balder@lemmy.world · ↑1 · edited · 3 days ago

        I mean, you can argue that if you ask the LLM something multiple times and it gives that answer the majority of those times, it has been trained to make that association.

        But a lot of these “wow, the AI wrote this” moments might just as well be some random thing it produced by chance.

      • 474D@lemmy.world · ↑10 · 5 days ago

        Which, oddly enough, is very useful for the regular everyday office-job bullshit that you need to input lol

    • Ulrich@feddit.org · ↑1 · 3 days ago

      I think that’s kinda the point though: to illustrate that you can make these things say whatever you want, and that they don’t know what the truth is. It forces their creators to come out and explain to the public that they’re not reliable.

      • j0ester@lemmy.world · ↑1 · 3 days ago

        I thought we all learned that from DeepSeek, when we asked it history questions… and it didn’t know the answer. It was censoring.

  • pyre@lemmy.world · ↑69 ↓1 · 4 days ago

    because it’s an llm there’s zero credence to what it says, but I like that grok’s takes on elon are almost exclusively dunking on him. this is like the 40th thing I’ve seen about grok talking about elon, and it always talks shit about him

    • TJA!@sh.itjust.works · ↑13 · 4 days ago

      But maybe you are only seeing the ones that dunk on Elon, because someone thinks those are newsworthy.

      Tbh I don’t think any of that is newsworthy, but ¯⁠\⁠_⁠(⁠ツ⁠)⁠_⁠/⁠¯

      • pyre@lemmy.world · ↑7 · 4 days ago

        it’s not, and that is probably the case. still good to see because I’m sure it annoys him as the most insecure bitch baby in the world.

    • sircac@lemmy.world · ↑6 · 4 days ago

      Well, there is probably some survivorship/confirmation bias in those statistics, since the answers that get shared are the funny ones… in any case, you probably don’t need an LLM to make such statements

      • ddh@lemmy.sdf.org · ↑6 · 4 days ago

        If Grok’s still being trained on the Internet, it’ll be self-reinforcing too.

  • sircac@lemmy.world · ↑43 · 4 days ago

    An LLM can also “reveal” that water ice melts into maple syrup, given the proper prompts. If people already lie (consciously or not) in proportion to their biases, I don’t understand why somebody would treat an LLM’s output as fact…

    • Someone8765210932@lemmy.world · ↑17 · edited · 4 days ago

      I agree, but in this case I think it doesn’t really matter if it is true. Either way, it is hilarious. If it is false, it shows how shitty AI hallucination is and what a bad state AI is in.

      Should the authors who publish this mention how likely this is all just a hallucination? Sure, but I think Musk is such a big spreader of misinformation, he shouldn’t get any protection from it.

      Btw, many people are saying that Elon Musk has (had?) a small PP and a botched PP surgery.

      • bstix@feddit.dk · ↑4 ↓1 · 4 days ago

        It’s usually possible to ask the AI for the sources. A proper journalist should always question the validity of their sources.

        Unfortunately, journalism is dead. This is just someone writing funny clickbait, but it’s quite ironic how they use AI to discredit AI.

        It makes sense for a journalist to discredit AI because AI took their jobs. This is just not the way to do it, because AI is also better at writing clickbait.

        • Petter1@lemm.ee · ↑3 · 4 days ago

          If an AI isn’t in web-search mode, it will just invent the most likely answer to the question about its sources. Chances are very high that such sources don’t even exist.

          • bstix@feddit.dk · ↑1 · 4 days ago

            That’s why you ask for the sources, so you can check them.

            I think this kind of prompting is an important part of how to use it in any meaningful manner.

            You can also input your own sources and ask it to use only those. For instance, upload a PDF of a law and ask it to figure out how to do something totally legally, then have it show where in the law it says so. You’ll obviously still need to check that the law actually says so and that it isn’t hallucinating.
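
            A minimal sketch of that kind of grounded prompt in Python (hypothetical throughout: ask_llm() stands in for whatever chat API you use, and law.txt is assumed to be text already extracted from the PDF):

            ```python
            # Hypothetical sketch: constrain the model to a user-supplied source.
            # ask_llm() is a placeholder, not a real library call.
            def build_grounded_prompt(source_text: str, question: str) -> str:
                return (
                    "Answer using ONLY the document below. Quote the exact passage "
                    "you relied on, and reply 'not in the document' if the answer "
                    "is not there.\n\n"
                    f"--- DOCUMENT ---\n{source_text}\n--- END DOCUMENT ---\n\n"
                    f"Question: {question}"
                )

            law_text = open("law.txt", encoding="utf-8").read()  # extracted from the PDF beforehand
            prompt = build_grounded_prompt(law_text, "How can I do X legally?")
            # answer = ask_llm(prompt)  # then verify the quoted passage yourself
            ```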

  • Redditsux@lemmy.world (OP) · ↑49 ↓5 · 5 days ago

    Is this response by Grok real? How does it have awareness that its responses are being tweaked?

  • Zier@fedia.io · ↑31 · 5 days ago

    Magic 8-ball, is xelon a bad person? [shakes ball] Answer: Signs point to yes.

  • magnetosphere@fedia.io · ↑26 ↓1 · 5 days ago

    Musk paid to build (and is paying to maintain) an AI that calls him out on his bullshit and stubbornly refuses to be “corrected”. That is an oversimplification, but I fucking love it anyway.

    • 4am@lemm.ee · ↑3 ↓1 · 5 days ago

      The reason it does this is probably that it trains on tweets. Maybe on other sources too.

      So keep tweeting about how Musk sucks and call him out on his bullshit (if you still use the Xitter) and Grok will, too. He can’t delete all of it!

  • Polderviking@feddit.nl · ↑10 · edited · 4 days ago

    As much as I’d love to take this at face value, people running with whatever an AI told them is highly problematic.

  • ZeroOne@lemmy.world · ↑10 · 4 days ago

    In other words, a proprietary response generator can be tweaked. How obvious.

    I am wondering what kind of person would take Grok’s word at face value.