…are some questions posted here so out of place that they feel like they were made up with AI? Or made to train AI? Don’t get me wrong, I love the genuine discussions here and I do engage, but some questions are just too… Forced? I’d love to give an example, but I won’t, because I don’t intend to out anybody. ^^

  • ameancow@lemmy.world
    1 day ago

    It’s not a coincidence that the “peter explain the joke” and “what is this thing” and “What does this mean” kinds of subreddits and communities saw such a sharp spike in activity in the last couple years, and why so many of these posts are baffling and asking about the weirdest things.

    It means two things:

    1. Yes, of course they’re letting complex chat bots out to learn human behavior, and even going as far as letting them make posts and create engagement in order to train themselves. This isn’t even a secret; Reddit’s partnership with Google was announced years ago, and they said this would happen. Of course it’s going to spread outside Reddit as more and more companies discard ethics and truth to get an edge in manipulating populations.

    2. People are actually getting significantly less aware and less cognitively engaged. Scrolling was the gateway drug; AI summaries, YouTube Shorts, AI-made clip farms, and distracting news cycles are the black-tar heroin.

    • emotional_soup_88@programming.devOP
      1 day ago

      Love the analogy! I wouldn’t even mind the technology itself. It’s a really smart way of indexing and then outputting what has been indexed according to the algorithm of your choice. It could be such a powerful tool in the right circumstances - hospitals, schools, libraries, dyslexic or deaf people, what have you. But I’m so incredibly disappointed at how the general population bought into the “AI” jargon and discourse. It’s detrimental to critical thinking and to human ingenuity.

      • ameancow@lemmy.world
        1 day ago

        The bigger concern here is that tech companies and governments can now use this to shape conversations, social attitudes and forge consent with the touch of a button.

        For example: if you read a big Reddit post that has thousands of comments, and everyone in the comments is saying how ridiculous it is to think the sky isn’t green, you’re going to say “What the hell is everyone talking about? The sky is blue.” And then 40 people in the comments pile on you and make fun of you, call you crazy or stupid, tell you that you’re either dumb or you don’t know the names of colors, tell you that you’re a member of some group you’ve never heard of before, even send you links to a site that talks about people like you who don’t know what color the sky is, etc. etc.

        Well, you’re probably going to feel really strange, and maybe even go outside to look up and make sure. Even if you don’t actually change your mind and accept that the sky is green, enough social pressure got you to question something you know deeply to be true. (Sorry if you’re colorblind, but the idea stands.)

        That’s something major that’s easy to confirm, and it will still seed doubt in some segment of the population. What about less obvious things? What about political agendas, wars, and ethnic cleansing? “Race science” and other tools of fascism?

        Imagine if you see three top posts a week talking about how useful it is having cameras upload all your daily data to a mysterious new tech company: “Wow, Ritnelap’s new full house surveillance system caught another burglar trying to break into my house last night!” “Yah, same! These systems are great, I don’t know how I lived without them before!”

        And on and on.