• GreenBeard@lemmy.ca
    ↑27 ↓1 · 5 hours ago

    Absolutely rude. If you’re using AI to make a point for you, you’ve already admitted you don’t know enough about what you’re talking about to have an opinion in the first place, let alone be worth discussing an issue with.

    • partofthevoice@lemmy.zip
      ↑13 · edited 2 hours ago

      I’ve had these interactions with the head of my IT department. I asked to procure a license for JFrog Artifactory. He literally copy-pasted a ChatGPT response to me that began like this:

      Here’s a breakdown of how JFrog Artifactory compares to using GitHub, NPM, or other language-specific package managers (like PyPI)…

      1. Purpose and Functionality

      2. Workflow & Developer Experience

      3. Security and Compliance

      When to use JFrog

      It came with a bunch of theoretical risks that are completely resolved by simply not being a complete fucking moron.
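      For context, the “when to use JFrog” question usually comes down to a one-line config change: pointing your package manager at a private Artifactory registry instead of the public one. A hypothetical `.npmrc` sketch (the URL and repo name are placeholders, not from this thread):

```ini
# .npmrc -- hypothetical example; the host and repo name are placeholders
registry=https://artifactory.example.com/artifactory/api/npm/npm-virtual/
always-auth=true
```

      Everything else (caching, promotion, scanning) happens server-side; the developer workflow barely changes.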

      It was really frustrating: I tried to talk with my IT leader and instead found a proxy for ChatGPT.

      After that, he created a group chat with himself, me, and my colleagues in security. He proceeded to paste ChatGPT output outlining bullshit risks and theories, with the implicit expectation that I rhetorically address each of them in my own response. I’d explain things like,

      “[well if you read the fucking request yourself, you’d know that] we aren’t planning to use the software that way, so the concern isn’t relevant. Even if we were though, those problems are easily addressable via …”

      In some cases, I even had to explain that the problems he was raising already exist in the current ecosystem, completely unrelated to the software I was talking about… ChatGPT was just straight up presenting an architectural problem as a software risk.

      I’d reply, and I swear to god he’d just give ChatGPT my text and paste the reply from ChatGPT back to me.

      I lost a lot of respect for him. Why the fuck would you do that?

      • Panthenetrunner@lemmy.dbzer0.com
        ↑1 · 32 minutes ago

        I’m fast coming to the conclusion that AI can indeed replace jobs. The thing is, the only job it can actually replace is that of a lazy middle manager. AI is great at responding to email if (a) you don’t know what you’re talking about, or (b) you don’t respect the other person enough to spend the time formulating an actual response. In my experience, AI is only really good at faking that there’s someone on the other end. The fact that there’s an entire management class it can conveniently impersonate is a pretty searing indictment as far as I’m concerned.

  • jason@discuss.online
    ↑19 · 5 hours ago

    My company hired a consulting firm to help with a transition period. The consulting firm sent my boss an email that outlined the plans for what we should do and how they are going to help. Without directly giving it away, the email was clearly AI output, and my boss instantly terminated their contract. We aren’t exactly anti-AI, but to the point of the post, it’s just so rude… and my boss is pretty fuckin cool.

    • mcv@lemmy.zip
      ↑9 · 3 hours ago

      Especially rude if you want to charge money for it. If your boss wanted an AI answer, they would have asked an AI. You don’t need an expensive consulting company for that.

  • aesthelete@lemmy.world
    ↑34 · edited 7 hours ago

    Totally agree. When someone sends me some AI slop about a topic I have knowledge of (this happened to me recently during a debug session) and asks me to read it, I think to myself “this person does not respect me, otherwise they wouldn’t be telling me to read stuff that may or may not be accurate and that they themselves never read.” It’s like a new, worse version of “let me google that for you,” but without the sarcasm, and without the results actually being helpful.

  • MrPnut@lemmy.world
    ↑40 ↓6 · 11 hours ago

    Whenever someone at work says “ChatGPT says this” or “Claude says this” or “I asked Gemini and…” whatever they say after that point is just static and I never take them seriously as a person again.

    • vacuumflower@lemmy.sdf.org
      ↑3 · 1 hour ago

      As a source it’s rude. As a piece of unreliable help of the kind “we both don’t know the syntax of that programming language, let’s ask Ollama how to draw such and such a shape in it” it’s kinda fine.

    • pHr34kY@lemmy.world
      ↑22 · 7 hours ago

      I appreciate the honesty when they say it’s an AI response and not genuine knowledge.

      When I tell someone “an LLM told me that…” It’s usually followed by “Let’s see if there’s any truth to it.” An AI response should always be treated as a suggestion, not an answer.

      Hell, Google’s AI still doesn’t know which day the F1 GP is on this week. It was wrong by a whole week a while back. Now it’s only off by a day.

      • mcv@lemmy.zip
        ↑7 · 3 hours ago

        An AI response should always be treated as a suggestion, not an answer

        Exactly. An AI response can be a great way to get started on a topic you know little about, but it’s never a definitive answer. You have to verify whether it’s actually true. Whether it works. Never trust it blindly.

        • Panthenetrunner@lemmy.dbzer0.com
          ↑1 · 21 minutes ago

          I feel like a big barrier is people anthropomorphizing the AI. It’s not “ChatGPT generated this,” it’s “ChatGPT said this.” I don’t necessarily blame people for it; a machine that speaks to you short-circuits something in people’s brains, and it’s not like we’ve got better language to talk about it. It’s just that… people treat it as an opinion, not as software output. And as long as that’s how people handle it, I just don’t know if a “healthy” use of the technology is possible.

  • Bibip@programming.dev
    ↑11 ↓2 · 11 hours ago

    hi friends i hope you’re well.

    i worked a laborious job and experienced a phenomenon i refer to as “parasitic thought”: someone will provide you all of the information a person would require to reach the correct conclusion, and then stare at you. they want you to crunch the info for them.

    i feel like one of those parasites in my agent interactions. i know i COULD think, but you can do it too, lil buddy. go on. do it for me.

    i don’t know about “reasonable” or “ethical” or “polite,” but in my experience: if someone just regurgitates some clank clank slop slop, it reads as hostile. “i can’t be bothered to communicate with you, here, read this wall of gpt-vomit”

    my instinct is to copy and paste, “LLM agent of my choice, what’s this person trying to say to me?” and then skim the ai synthesized summary of the ai composed body text generated from some idiot’s faint echoes of thought.

    in the words of your highschool biology teacher, the human is the powerhouse of the agentic loop. in my unimportant opinion, responsible use of genai agents means that the output should be indistinguishable from, if not better than, something you wrote by hand.

    there are privacy implications too. linguistic assessment can be used to identify you. from a privacy perspective, the internet would be a better place if everyone fed their carefully formed thoughts to an LLM and said “make this look like chatgpt 3 wrote it.”
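    the stylometry point is a real one: even crude features like function-word frequencies can fingerprint an author. a toy sketch in python (the word list, function names, and texts are invented for illustration; real stylometry uses hundreds of features):

```python
from collections import Counter
import math

# a handful of common function words; real systems use hundreds
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is", "i", "it"]

def profile(text):
    """frequency vector of function words, normalized by total word count."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def similarity(a, b):
    """cosine similarity between two profiles (1.0 = identical style)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0
```

    run enough of someone’s posts through this and their profile stabilizes; laundering everything through an LLM flattens exactly that signal.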

  • pixxelkick@lemmy.world
    ↑96 ↓1 · 18 hours ago

    Something that some coworkers have started doing that is even more rude, in my opinion, as a new social etiquette: AI-summarizing my own writing in response to me, or just outright copy-pasting my question into GPT and then pasting the output back to me.

    Not even an “I asked ChatGPT and it said”; they just dump it in the chat @ me.

    Sometimes I’ll write up a 2~3 paragraph thought on something.

    And then I’ll get a ping 15min later and go take a look at what someone responded with annnd… it starts with “Here’s a quick summary of what (pixxelkick) said! <AI slop that misquotes me and just gets it wrong>”

    I find this horribly rude tbh, because:

    1. If I wanted to be AI summarized, I would do that myself damnit
    2. You just clogged up the chat with garbage
    3. like 70% of the time it misquotes me or gets my points wrong, which muddies the convo
    4. It’s just kind of… dismissive? Instead of just fucking reading what I wrote (and I consider myself pretty good at conveying a point), they pump it through the automatic enshittifier without my permission/consent, and dump it straight into the chat as if that is now the talking point instead of my own post one comment up

    I have had to very gently respond each time a person does this at work and state that I am perfectly able to AI-summarize myself on my own, and that while I appreciate their attempt, it’s… just coming across as wasting everyone’s time.

    • Vlyn@lemmy.zip
      ↑1 · 23 minutes ago

      Oof, I don’t even get what they’re trying to accomplish there. Maybe they had some kind of social training that told them to “summarize and repeat what you understood first, to show that you listened and to avoid miscommunication, then add your response,” and their brain short-circuited into thinking a ChatGPT summarization is the same thing.

      I’d get pretty hostile at work if someone started to do that…

    • XLE@piefed.social
      ↑23 ↓1 · 13 hours ago

      This is sad, really. People are fed the lie that AI is objective, and apparently they think that they will get the objective summary of what you said if they run it through a chatbot.

      And the more people interact with chatbots, the harder they find it to interact outside of the chatbots. So they might feel even more uncomfortable with asking you to summarize yourself. So they go back to the chatbot. It’s a self-perpetuating cycle.

      • ErmahgherdDavid@lemmy.dbzer0.com
        ↑3 · edited 58 minutes ago

        Exactly. To your point, AI output is probabilistically the average opinion of everyone on the internet, so it shares the common biases of the general public, even with a bit of RLHF to “balance out” the models. It also probably doesn’t help to anthropomorphise them: they don’t have opinions, they just autocomplete based on prior input.
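        The “autocomplete” framing is literal: at its core, a language model picks likely continuations from frequencies in its training data. A toy bigram sketch (the corpus and function names here are made up for illustration):

```python
from collections import defaultdict, Counter

def train_bigrams(corpus):
    """Count which word follows which in the training text."""
    model = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def autocomplete(model, word):
    """Return the most frequent continuation -- the 'average' of the data."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]
```

        Scale the same idea up by billions of parameters and you get an LLM. It’s still frequency, not opinion.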

        It seems pretty clear after a few years of people getting AI psychosis that LLMs are an addictive psychological hazard

  • lemmydividebyzero@reddthat.comOP
    ↑62 ↓1 · 18 hours ago

    I already think it’s insulting when people accomplish/do/implement/… something, want to inform the others, and do that by generating a 1–2 page wall of text via LLM that is then copy-pasted into an email…

    Like… Can’t you just write down the 5 or 10 most important points? Are we not worth the time to do so? Do we have to find the most relevant information ourselves in that text???

    • zqwzzle@lemmy.ca
      ↑31 ↓1 · 18 hours ago

      You’re supposed to feed it into your own prompt to summarize it duh. /s

    • MagicShel@lemmy.zip
      ↑10 ↓4 · edited 16 hours ago

      I sometimes use LLMs to help me with brevity or clarity. But the input is my own words and the output is almost always edited so that I sound like me because sometimes, while the output is serviceable, it’s just… bad and uninspired.

      Plus it’s like “this doesn’t sound professional”. Well, fuck you, it sounds how I want to sound.

        • Zos_Kia@jlai.lu
          ↑2 · 2 hours ago

          The best way to learn to write is to write and have someone critique you. That someone can be an AI; it doesn’t change anything about the process, as long as the initial input is your own best effort and the final result is your own edit based on the feedback you received.

        • MagicShel@lemmy.zip
          ↑15 ↓2 · 13 hours ago

          Why do I rely on it? I accept suggestions when they ring true, even if the source of the suggestions is a slop generator. I accept what it’s right about and reject what I don’t. And why not? It costs nothing.

          And, at 52, I write the way I write. I enjoy the process, I enjoy playing with language. I enjoy the juxtaposition of literary flourishes with a crude fuck you thrown in as punctuation and counterpoint to what might otherwise seem inaccessible or deliberately obtuse.

          But do you know what I’ve found? I can be a little overly self-indulgent. For example, you didn’t want all this, you just wanted to throw your glib little “lrn2write” and garner a few upvotes from the vehement AI haters and give yourself a self-righteous pat on the back.

          Sometimes I need another perspective to suggest restraint. As you can see, this, like 98% of my writing, is mine alone, else I’d’ve taken what would undoubtedly be good advice and held back on the more acerbic bits, and made sure I wasn’t posting some knee-jerk defensive self-indulgent 100% man-made slop.

          But here we are.

          • tomalley8342@lemmy.world
            ↑5 ↓10 · 11 hours ago

            It costs nothing.

            Except for an opportunity to practice getting better at the thing you recognize is an issue.

            And, at 52, I write the way I write.

            Although I guess you’ve already given up on the getting better part.

          • ToTheGraveMyLove@sh.itjust.works
            ↑4 ↓14 · 12 hours ago

            How do you know it’s a good suggestion if you don’t know what you’re doing? Think for yourself; stop trusting slop.

            And, at 52, I write the way I write.

            Apparently not. Now you write the way a slop generator tells you to write.

  • Hackworth@piefed.ca
    ↑18 ↓1 · 18 hours ago

    It’s more about post size for me. If ya post a few sentences that clearly and concisely communicate a point, I don’t really care if they’re crafted or generated. If ya post a wall of text, I wanna know ya put the kind of effort in that made its length necessary if I’m gonna put in the effort to read it.

    • Vlyn@lemmy.zip
      ↑1 · 18 minutes ago

      ChatGPT especially is awful for that. You ask it something and it always spews out a whole page of content.

      At work we use Claude, which does produce better output and also calls out your bullshit. There it’s actually helping quite a bit (software development), but of course you have to understand what you are changing and clean things up.

  • FaceDeer@fedia.io
    ↑2 ↓20 · 18 hours ago

    Uh huh. And at the same time, I’m frequently told “it’s the deception that we hate! Don’t claim you did something if an AI actually did it!”

      • FaceDeer@fedia.io
        ↑0 ↓15 · 15 hours ago

        I’m pointing out that people find excuses to hate on AI regardless of what you do with it. Makes it pointless to compromise or otherwise try to satisfy them.

        • XLE@piefed.social
          ↑15 · edited 13 hours ago

          It does multiple bad things.

          Saying “aha, you used to say you hated deception, but now you hate another bad thing” is not a gotcha.

          I dislike many bad things, but you seem locked into defending AI at all costs. Please go back to Reddit.

          • FaceDeer@fedia.io
            ↑0 ↓9 · 12 hours ago

            I seem to recall that the Fediverse was keen to bring in Reddit refugees. Only ones that agree with the existing preferred opinions, I guess?

          • FaceDeer@fedia.io
            ↑0 ↓5 · 12 hours ago

            You don’t have to use it. Other people who do find value in it use it.

              • FaceDeer@fedia.io
                ↑1 ↓6 · 11 hours ago

                OP provided no context whatsoever.

                Over the years there have been so many conversations I’ve been in online where someone asked something where the answer was trivially found with Google or some other search engine, but the conversation was interesting so I would Google it and provide the answer as part of my response. Is that blockworthy too?

  • Iconoclast@feddit.uk
    ↑0 ↓1 · 9 minutes ago

    Just because the final output comes from AI doesn’t always mean a human didn’t put real effort into writing it. There’s a big difference between asking an LLM to write something from scratch, telling it exactly what to say, or just having it edit and polish what you already wrote.

    A ton of my replies here - including this one - are technically “AI output,” but all the AI really did was take what I wrote, clean it up, and turn it into coherent text that’s easier for the reader to follow.

    spoiler

    Original text: Just because the final output is by AI doesn’t always mean human didn’t put effort into writing it. There’s a difference between asking LLM to write something, telling LLM what to write or asking it to edit something you wrote.

    A large number of my replies here, including this one, are technically “AI output” but all the AI did was go through what I wrote and try and turn it into coherent text that the is easy for the recipient to consume.