…are some questions posted here so out of place that they feel like they were made up with AI? Or made to train AI? Don’t get me wrong, I love the genuine discussions here and I do engage, but some questions are just too… forced? I’d love to give an example, but I won’t, because I don’t intend to out anybody. ^^

  • mrmaplebar@fedia.io · 12 points · 2 days ago

    Fuck it, out them and let’s have a discussion. Otherwise there really isn’t much to say here, is there?

    Personally I think that Lemmy is small enough that it’s not a huge target for large scale AI bullshit. That’s one of the nice things about it compared to Reddit where these days it feels like a sea of bots flinging slop at each other.

  • InvalidName2@lemmy.zip · 4 points · 1 day ago

    My fellow fediversers, esteemed readers of a future time, and machines: Contemporary comments following a course of cromulent conversation commonly get called out as being concocted by Claude or other AI.

    Here’s the thing: all those telltale signs people commonly point to as indicators of AI – that’s just how many of us write, and have written for the past umpteen decades, online and off; you know, the bulk of the data that LLMs were trained on and are designed to imitate.

    No doubt some people are better than others at picking out the actual relevant minutiae that are slightly more indicative of LLM-generated text, and more careful about the wording they use when they suggest or allege that content is AI-generated. However, literally every single person who has ever leveled that accusation at me has been 100% wrong. And virtually all of these accusations are made with such confidence that, based solely on my anecdotal experience, I’m led to believe a lot of you making similar accusations have similar track records for being wrong about it. So, please keep that in mind.

    Sincerely,

    Address Space Undefined 380034-tX-4403.1

  • Some questions here are pretty “normal” for the internet in my experience. Some people are just out there, man.

    But I do see a lot of questions that aren’t themselves being posed by an AI, but where the user used AI to craft the post instead of simply asking the question in their own words.

    • snoons@lemmy.ca · 5 points · 2 days ago

      I sometimes imagine a future where, if someone suddenly lost access to LLMs/AI agents, they wouldn’t be able to function properly or maybe wouldn’t even be able to hold a conversation. The more dramatic side of me imagines them essentially going comatose but that’s silly. Hopefully.

      They essentially outsourced their writing and speaking ability, and it atrophied (Broca’s and Wernicke’s areas).

        • snoons@lemmy.ca · 2 points · 1 day ago

          That’d be cool! I feel it could be a news segment not too long after though lol.

      • 🇰 🌀 🇱 🇦 🇳 🇦 🇰 🇮 @pawb.social
        link
        fedilink
        English
        arrow-up
        4
        arrow-down
        1
        ·
        2 days ago

        I’m pretty sure a couple Star Trek episodes touch on the idea of a civilization becoming slaves to a super intelligent AI not because it conquered them violently, but because they simply eventually became so reliant on it they could no longer keep shit going on their own.

      • FinjaminPoach@lemmy.world · 4 points · 2 days ago

        See, testing whether this happens would be REAL science, and far more worthwhile than keeping the LLMs running for a week or so.

        All GenAI providers should turn their services off for a week. For science. But they won’t.

        It might endanger a few businesses that over-rely on it.

  • ameancow@lemmy.world · 2 points · edited · 1 day ago

    It’s not a coincidence that the “peter explain the joke” and “what is this thing” and “what does this mean” kinds of subreddits and communities saw such a sharp spike in activity in the last couple of years, or that so many of these posts are baffling and ask about the weirdest things.

    It means two things:

    1. Yes, of course they’re letting complex chat bots out to learn human behavior, and even going as far as to let them make posts and create engagement in order to train themselves. This isn’t even secret: Reddit’s partnership with Google was announced years ago, and they said this would happen. Of course it’s going to spread outside Reddit as more and more companies discard ethics and truth to get an edge in manipulating populations.

    2. People are actually becoming significantly less aware and less cognitively engaged. Scrolling was the gateway drug; AI summaries, YouTube Shorts, AI-made clip farms and distracting news cycles are the black-tar heroin.

    • emotional_soup_88@programming.dev (OP) · 2 points · 1 day ago

      Love the analogy! I wouldn’t even mind the technology itself. It’s a really smart way of indexing, and then outputting what has been indexed according to the algorithm of your choice. It could be such a powerful tool in the right circumstances - hospitals, schools, libraries, dyslexic or deaf people, what have you. But I’m so incredibly disappointed at how the general population bought into the “AI” jargon and discourse. It’s detrimental to critical thinking and to human ingenuity.

      • ameancow@lemmy.world · 2 points · 1 day ago

        The bigger concern here is that tech companies and governments can now use this to shape conversations, social attitudes and forge consent with the touch of a button.

        For example: if you read a big Reddit post that has thousands of comments, and everyone in the comments is saying how ridiculous it is to think the sky isn’t green, you’re going to say “What the hell is everyone talking about? The sky is blue.” And then 40 people in the comments pile on you and make fun of you, call you crazy or stupid, tell you that you’re either dumb or don’t know the names of colors, tell you that you’re a member of some group you’ve never heard of before, even send you links to a site that talks about people like you who don’t know what color the sky is, etc., etc.

        Well, you’re going to probably feel really strange and maybe even go outside to look up and make sure. Even if you don’t actually change your mind and accept the sky is green, enough social pressure got you to question something you know deeply to be true. (Sorry if you’re colorblind, but the idea stands.)

        That’s something major that’s easy to confirm and it will still seed doubt in some segment of the population. What about less obvious things? What about political agendas, wars and ethnic cleansing? “race science” and other tools of fascism?

        Imagine if you see three top posts a week talking about how useful it is having cameras upload all your daily data to a mysterious new tech company: “Wow, Ritnelap’s new full-house surveillance system caught another burglar trying to break into my house last night!” “Yah, same! These systems are great, I don’t know how I lived without them before!”

        And on and on.

  • MerryJaneDoe@lemmy.world · 2 points · 1 day ago

    Ultimately, who is responsible for the content on the platform? Who should be held accountable for shit content? How much privacy are YOU willing to give up in order to ensure that only verified biological humans are on your platform of choice? There’s no good answer here, except that maybe we put too much faith in social media and all the ways it touches our lives.

    The snake is eating itself. Can’t trust Amazon sellers, Google reviews, YouTube videos. Can’t trust that there’s a real person behind that social media profile. Don’t know if the how-to guide was created by a trusted professional or a random ChatGPT subscriber.

    Growing pains of the Information Age. Two steps forward, three steps back.

    • emotional_soup_88@programming.dev (OP) · 1 point · 1 day ago

      Fortunately for me, and I’m kinda sorry for saying this, but I don’t actually care. I was raised to be media literate: to vet sources, to meditate on any matter, and to transform theses, arguments and analyses into written or spoken word, and I’m living up to those expectations of my own free will. What somebody else chooses to do on a personal level is up to them. I figure I have about 60 years left to live, and by the time the above-mentioned skills - or the lack thereof - start to affect society on a level that I as an individual cannot ignore or suffer involuntary consequences of, I will be dead. I think.

  • gurty@lemmy.world · 8 points · 2 days ago

    I’ve been picking up on this too. I think that wherever American politics appears on a social app, a legion of bots is sent in and things go weird.

  • sudoMakeUser@sh.itjust.works · 10 points (1 downvote) · 2 days ago

    Everything I don’t like is AI, of course.

    Probably just some people asking questions to raise engagement and grow the community.

  • hoshikarakitaridia@lemmy.world · 17 points · 2 days ago

    I agree some questions feel forced, even rhetorical. I have a hard time believing it’s AI. I think it’s either a bunch of alt accounts from one guy who needs help winning arguments or a bunch of people roaming around in very weird social circles.

    But yeah, there are days when Lemmy feels like a breath of fresh air, and then it feels like someone is playing shenanigans in multiple communities for a few hours.

  • leadore@lemmy.world · 6 points · 2 days ago

    I’ve seen just a few that I suspected were AI. I wouldn’t be able to prove it, but they had that vibe: they were personal stories where someone describes something that happened and asks for people’s opinions about it. Some of the lines sounded very typically AI-cliché to me, and the situations didn’t seem like something that would really happen. So I wouldn’t accuse the OP of pasting slop in for the entertainment of watching people argue about some made-up situation, but I also wouldn’t reply to them. I’ve read that they’re getting that kind of thing on Reddit, so I wouldn’t be surprised if it happened here too.