• pixxelkick@lemmy.world
    19 hours ago

    Something some coworkers have started doing that is even more rude, in my opinion, as a new bit of social etiquette: AI-summarizing my own writing in response to me, or just outright copy-pasting my question into GPT and pasting the output back to me

    Not even an “I asked ChatGPT and it said”; they just dump it in the chat @ me

    Sometimes I’ll write up a 2~3 paragraph thought on something.

    And then I’ll get a ping 15min later and go take a look at what someone responded with annnd… it starts with “Here’s a quick summary of what (pixxelkick) said! <AI slop that misquotes me and just gets it wrong>”

    I find this horribly rude tbh, because:

    1. If I wanted to be AI summarized, I would do that myself damnit
    2. You just clogged up the chat with garbage
    3. like 70% of the time it misquotes me or gets my points wrong, which muddies the convo
    4. It’s just kind of… dismissive? Like instead of just fucking reading what I wrote (and I consider myself pretty good at conveying a point), they pump it through the automatic enshittifier without my permission/consent and dump it straight into the chat, as if that is now the talking point instead of my own post 1 comment up

    I have had to very gently respond each time a person does this at work and state that I am perfectly able to AI-summarize myself on my own, and that while I appreciate the attempt, it’s… just coming across as wasting everyone’s time.

    • Vlyn@lemmy.zip
      2 hours ago

      Oof, I don’t even get what they’re trying to accomplish there. Maybe they had some kind of social training that told them “Summarize what you understood first to show that you listened and avoid miscommunication, then add your response,” and their brain short-circuited into thinking a ChatGPT summarization is the same thing.

      I’d get pretty hostile at work if someone started to do that…

    • doesit@sh.itjust.works
      2 hours ago

      I’d leave the “appreciate the attempt” part out. You don’t.
      More importantly, I would enquire whether they use a corporate or a free AI. The latter is used for training and has little or no protection of (perhaps sensitive) corporate info/data.

      • nickiwest@lemmy.world
        1 hour ago

        I think at some point it will come out that the corporate subscription is no different, and that the LLM companies have been scraping everything for training data.

    • XLE@piefed.social
      15 hours ago

      This is sad, really. People are fed the lie that AI is objective, and apparently they think that they will get the objective summary of what you said if they run it through a chatbot.

      And the more people interact with chatbots, the harder they find it to interact outside of the chatbots. So they might feel even more uncomfortable with asking you to summarize yourself. So they go back to the chatbot. It’s a self-perpetuating cycle.

      • ErmahgherdDavid@lemmy.dbzer0.com
        2 hours ago

        Exactly. To your point, AI output is probabilistically the average opinion of everyone on the internet, so it shares the common biases of the general public, even with a bit of RLHF to “balance out” the models. It also probably doesn’t help to anthropomorphise them. They don’t have opinions; they just autocomplete based on prior input.

        It seems pretty clear, after a few years of people getting AI psychosis, that LLMs are an addictive psychological hazard.