I want to let people know why I’m strictly against using AI in anything I do, without sounding like an ‘AI vegan’, especially in front of those who are genuinely ready to listen and follow suit.

Any sources I try to cite for my viewpoint are either mild enough to be mistaken for AI-generated themselves or filled with the author’s extremist views. I want to explain the situation in a way that is objective, simple to understand, and alarming enough for people to take action.

  • NoSpotOfGround@lemmy.world · 21 hours ago

    What are some good reasons why AI is bad?

    There are legitimate reasons people worry about AI. Here are some of the strongest, clearly framed concerns:

    1. Bias and unfair decisions

    AI systems often learn from biased data and can unintentionally discriminate—against certain races, genders, ages, or socioeconomic groups—in hiring, lending, housing, policing, and more.

    2. Lack of transparency

    Many AI models act as “black boxes,” making decisions that are hard to explain. This creates problems when the stakes are high (medical diagnosis, legal decisions, etc.).

    3. Privacy risks

    AI can analyze huge amounts of personal data, track behavior, or identify people through facial recognition—often without explicit consent.

    4. Job displacement

    Automation threatens certain categories of work, particularly routine or repetitive jobs. Without proper planning, this can increase inequality and unemployment.

    5. Misinformation and deepfakes

    AI makes it easier to create convincing fake audio, video, or text. This can undermine trust in media, fuel propaganda, and destabilize democratic processes.

    6. Weaponization

    AI can be used in autonomous weapons, cyberattacks, targeted surveillance, or manipulation—raising serious security and ethical issues.

    7. Overreliance and loss of human skills

    As AI does more tasks, people may become too dependent, reducing critical thinking, creativity, or expertise in certain fields.

    8. Concentration of power

    Powerful AI tools tend to be controlled by a few big companies or governments, potentially leading to monopolies, inequality, and reduced individual autonomy.

    9. Alignment and control risks

    Advanced AI systems may behave in unexpected or harmful ways if their goals aren’t perfectly aligned with human values—even without malicious intent.

    10. Environmental impact

    Training large AI models consumes significant energy and resources, contributing to carbon emissions.


    If you want, I can also provide reasons why AI is good, help you construct an argument for a debate, or analyze specific risks more deeply.

Were you looking for this kind of reply? If you can’t express why you hold an opinion, maybe that opinion is not well founded in the first place. (Not saying it’s wrong, just that it might not be justified/objective.)

    • Armok_the_bunny@lemmy.world · 21 hours ago

      Please, for the love of god, tell me you didn’t write that post with AI, because it really looks like that was written with AI.

      • NoSpotOfGround@lemmy.world · 21 hours ago

Except for the first phrase and the last paragraph, it was AI. Honestly, it feels like OP is taunting us with such a vague question. We don’t even know why they dislike AI.

        I’m not an AI lover. It has its place and it’s a genuine step forward. Less than what most proponents think it’s worth, more than what detractors do.

I only use it myself for documentation on the framework I program in, and it’s reasonably good for that, letting me extract information faster than reading through the docs myself. Otherwise I haven’t used it much.

        • enchantedgoldapple@sopuli.xyz (OP) · 21 hours ago

My question was genuine. I wasn’t an avid user of generative AI when it was first released, and lately I have decided against using it at all. I tried to use it in niche projects and it was completely unreliable. Its tone of speech is bland, and the way it acts as a friend feels disturbing to me. Plus, the environmental destruction it is causing on such a large scale is honestly depressing to me.

All that being said, it is not easy for me to communicate these points to someone as clearly as I have experienced them. It’s like the case for informing people about privacy: casual users aren’t inherently aware of the consequences of using this tool and consider it a godsend. It will be difficult to convince them that the tool they cherish so much is not that great after all, thus I am asking here what the best approach should be.

          • Blue_Morpho@lemmy.world · 21 hours ago

I wasn’t an avid user of generative AI when it was first released, and lately I have decided against using it at all. I tried to use it in niche projects and it was completely unreliable. Its tone of speech is bland, and the way it acts as a friend feels disturbing to me. Plus, the environmental destruction it is causing on such a large scale is honestly depressing to me.

            Isn’t that exactly the answer you are looking for?

            • FaceDeer@fedia.io · 20 hours ago

              The “environmental destruction” angle is likely to cause trouble because it’s objectively debatable, and often presented in overblown or deceptive ways.

        • athatet@lemmy.zip · 21 hours ago

          “Good catch! I did make that up. I haven’t been able to parse your framework documentation yet”

    • AmidFuror@fedia.io · 21 hours ago

You beat me to it. To make it less obvious, I ask the AI to be concise, and I manually replace the em dashes with hyphens.
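
      The manual replacement step is easy to script, for what it’s worth. A minimal sketch in Python (the function name is my own, not from any tool mentioned here):

      ```python
      def replace_emdashes(text: str) -> str:
          """Swap em dashes (U+2014) for plain hyphens."""
          return text.replace("\u2014", "-")

      print(replace_emdashes("AI systems\u2014often biased\u2014need oversight."))
      # prints: AI systems-often biased-need oversight.
      ```

      Running a draft through something like this is the same edit, just without the chance of missing one.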

      • FaceDeer@fedia.io · 20 hours ago

I haven’t tested it, but I saw an article a little while back saying that you can add “don’t use em dashes” to ChatGPT’s custom instructions and it’ll leave them out from the beginning.

        It’s kind of ridiculous that a perfectly ordinary punctuation mark has been given such stigma, but whatever, it’s an easy fix.