AI can’t be all that bad. The problem I keep seeing with AI is that it’s a double-edged sword. You have corporations shoving AI into just about everything and treating it like a cure for cancer, which really rubs people the wrong way. Then, on more of a societal level, you’ve got people using AI for all sorts of things: from making art with AI and still crediting themselves as artists, to treating AI like a therapist when that’s not advised.

However, I’ve found some benefits to AI. For example, I’m chatting with ChatGPT about credit cards, because it’s something I may want to get into. It’s helping me understand them better than most people who have tried explaining them to me, simply because it gives me a more streamlined answer instead of beating around the bush.

    • DominusOfMegadeus@sh.itjust.works · 29 days ago · ↑5 ↓11

      Because all other information on credit cards (or anything else) on the internet available to people eager to learn is 100% accurate, all the time?

  • seahag@lemmy.world · edited 29 days ago · ↑23 ↓2

    AI has uses in the medical, scientific, and disabled communities. I’ve seen it helping blind people with shopping, with Google glasses or whatever reporting what they’ve picked up and describing it to them. It can also identify/predict cancer tissue early.

    Generative AI is peak laziness and the death of human creativity. Using AI for companionship has a nasty effect on mental health.

    AI should have only ever been an assistant in medical/scientific research in my opinion, simply because it’s so damaging to the environment, economy, and society.

    • iByteABit@lemmy.ml · 29 days ago · ↑2

      It can also identify/predict cancer tissue early.

      Do you mean an LLM or a machine learning model specifically trained for this?

      • Paragone@lemmy.world · 29 days ago · ↑1 ↓2

        Different case, obviously, but I remember reading about an AI that could identify a pending heart attack from X-rays, and nobody could figure out what the hell it was judging from…

        THAT is brilliant.

        Specialized to the degree that it is trustworthy.

        I’d be surprised if humans could compete against a properly done set of AIs that worked through, in the correct order, all possible diagnostic reasoning.

        Democratizing accurate diagnosis would be THE medical revolution the world needs, now.

        _ /\ _

  • sicktriple@lemmy.ml · edited 29 days ago · ↑18

    The technology itself is novel and cool. It’s the complete and utter meltdown of all tech companies into brainless hype machines that is harmful, which, of course, is a function of capitalist incentives and the tech industry’s need to come out with some new paradigm-shifting innovation every decade. A normal, healthy society would have been able to leverage machine learning and LLM technology where it’s most useful, like parsing large amounts of data, or running a local instance on your computer to ask a few questions. We wouldn’t see LLMs in every text editor, pencil case, and pair of sneakers, but these snake-oil salesmen who run the US economy are absolutely desperate for a new paradigm shift so they can keep making exponentially more money.

    The thing is, we don’t need to build these datacenters siphoning comically evil amounts of energy from the grid and making personal compute a thing of the past. The average everyday person doesn’t need cloud compute; they can run a local 4B-parameter (very, very small) model on their laptop or phone if they need to ask ChatGPT to make them a workout routine or to tell them who won the 1918 World Series. But these fucking cretins don’t care; that’s not the point. They are in this because it’s a golden ticket to growth city, and once they cash their check they don’t give one hot fuck about the human-spirit-stealing machine they built.
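
A local setup like the one described is a few dozen lines. This sketch assumes an Ollama-style server listening on localhost; the endpoint URL, model name, and JSON field names are assumptions based on that family of tools, not a guarantee of any particular product’s API:

```python
import json
import urllib.request

# Assumed Ollama-style local endpoint; adjust for your own server.
LOCAL_ENDPOINT = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Assemble the JSON payload for a local generation API."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_model(prompt: str, model: str = "gemma:2b") -> str:
    """POST the prompt to the local server and return the generated text."""
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        LOCAL_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# e.g. ask_local_model("Who won the 1918 World Series?")
```

Nothing leaves the machine: the only network hop is to localhost.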

    TLDR: our society is broken, and that’s why we keep getting the shittiest, lowest-common-denominator version of everything. Everything has to suck by definition, because that’s the only version the system we built will allow.

  • logos@sh.itjust.works · 29 days ago · ↑18 ↓1

    I have a friend at work who does a lot of video. He films weddings, music videos, etc., and is making a pilot for Netflix. He uses AI to go through all his footage and tag it according to content. E.g. if he needs a clip of birds, he can just search ‘birds’ and it will pull up all relevant footage. Incredibly useful.
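
That workflow is essentially an inverted index: a vision model (not shown here) assigns tags to each clip once, and every later search is just a dictionary lookup. A minimal sketch, with made-up clip names and tags:

```python
from collections import defaultdict

def build_tag_index(tagged_clips: dict) -> dict:
    """Invert {clip: [tags]} into {tag: [clips]} for instant search."""
    index = defaultdict(list)
    for clip, tags in tagged_clips.items():
        for tag in tags:
            index[tag.lower()].append(clip)
    return index

def search(index: dict, query: str) -> list:
    """Case-insensitive tag lookup."""
    return index.get(query.lower(), [])

# In practice these tags would come from a vision model run over each clip.
clips = {
    "wedding_04.mp4": ["birds", "outdoor", "crowd"],
    "broll_17.mp4": ["birds", "sunset"],
    "interview_02.mp4": ["indoor", "two people"],
}
index = build_tag_index(clips)
print(search(index, "birds"))  # → ['wedding_04.mp4', 'broll_17.mp4']
```

The expensive AI step happens once at ingest; the search itself stays fast and deterministic.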

  • Lumidaub@feddit.org · 29 days ago · ↑11 ↓1

    If we’re strictly talking about LLMs: certain accessibility services, MAYBE. Writing closed captions / transcription for the most part requires little “human” touch. If we ASSUME that AI will be able to do it reliably one day (because it really can’t yet), that’s one thing that would benefit society.

    Image description is another thing I might see done by AI one day, but that still requires an understanding of what’s actually important about the image.

    • lepinkainen@lemmy.world · 29 days ago · ↑1

      I built a system that translates subtitles from English to my native language, and it beats cheap-ass “official” translations 9 times out of 10.

      It even gets colloquial terms and phrases right (adapting correctly to a song, for example), which a human translator working for minimum pay usually won’t bother with.
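
The pipeline for something like that is mostly plumbing: split the .srt file into numbered blocks, translate only the text payloads, and re-emit with the original timestamps untouched. A sketch with the model call stubbed out by `str.upper` so it runs standalone:

```python
import re

def parse_srt(srt_text: str) -> list:
    """Split an .srt file into {index, timing, text} blocks."""
    blocks = []
    for raw in re.split(r"\n\s*\n", srt_text.strip()):
        lines = raw.splitlines()
        if len(lines) >= 3:
            blocks.append({"index": lines[0], "timing": lines[1],
                           "text": "\n".join(lines[2:])})
    return blocks

def translate_srt(srt_text: str, translate) -> str:
    """Run each subtitle's text through `translate`; keep numbering and timing."""
    out = []
    for b in parse_srt(srt_text):
        out.append(f"{b['index']}\n{b['timing']}\n{translate(b['text'])}")
    return "\n\n".join(out)

sample = ("1\n00:00:01,000 --> 00:00:03,000\nHello there.\n\n"
          "2\n00:00:04,000 --> 00:00:06,000\nGoodbye.")
print(translate_srt(sample, str.upper))
```

In the real version `translate` would be the LLM call, ideally fed a few surrounding lines of context so colloquialisms land right.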

    • Paragone@lemmy.world · 29 days ago · ↑0 ↓2

      Please go watch the YouTube video of Bernie Sanders discussing politics/society/civilization with Claude.ai.

      That may blow your mind…

      It’s… not quite as limited as you, or I, had been believing…

      _ /\ _

  • MerrySkeptic@sh.itjust.works · 29 days ago · ↑10

    I’m a therapist. I use HIPAA compliant AI to generate my (editable) case notes for my sessions now. Not only is it a huge time saver to simply edit a generated note as opposed to making one from scratch, but in many cases it takes more detailed notes, including quotes from clients.

    I have heard of other therapists and medical doctors also using AI to help with diagnosing.

    The danger is when therapists don’t review the content to check for accuracy, because occasionally it will generate something not really reflective of what the therapist was actually doing, or it might lack detail that the therapist would otherwise have included. But more often the stuff it comes up with is surprisingly accurate. And editing is even easier when you can just tell the AI something like, “include more details about how the client noticed their pattern of putting their own feelings last,” and it just does what you asked. You don’t necessarily have to edit manually, though you can.

      • MerrySkeptic@sh.itjust.works · 29 days ago · ↑8

        Yes, basically, but since it is HIPAA compliant, the recording is automatically destroyed when the note is saved. Also, no protected recordings are used to train the AI. The therapist can also choose from a number of different case note formats that focus on different things.

          • SuperUserDO@piefed.ca · 29 days ago · ↑6

            People conflate security with risk mitigation. It’s not secure in the way that you can confirm the data has been deleted. The risk however is mitigated due to vendor attestations reinforced by contracts.

            • Helix 🧬@feddit.org · 29 days ago · ↑2

              Yep, so you can’t actually know if the recording is destroyed, it’s just contractually required to be destroyed. Big difference in my book.

              I wish these sensitive recordings would be processed locally and never leave the therapist’s network instead.

          • MerrySkeptic@sh.itjust.works · 29 days ago · ↑2

            I can’t know for certain, as I’m not on the product side of things. But I do know that HIPAA standards are very rigorous and if it were discovered that they were intentionally misleading therapists and clients then it would invite a class action lawsuit that would be insanely large.

            I do ask for and document my clients’ consent, though, so if anyone is not comfortable with it that’s fine. I just write the note the old fashioned way. Most are fine but a few have said they don’t want to and it’s not a big deal.

          • lepinkainen@lemmy.world · 29 days ago · ↑1

            A HIPAA violation is a death sentence to a company, along with massive fines.

            There’s no incentive for them to fuck around

  • TrackinDaKraken@lemmy.world · edited 29 days ago · ↑9

    For every small benefit, there are disastrous mistakes. We shouldn’t discuss one without the other:

    https://tech.co/news/list-ai-failures-mistakes-errors

    March 2026

    • Police used AI facial recognition to arrest a Tennessee woman for crimes committed in a state she says she’s never visited

    February 2026

    • Health advice given by AI chatbots is frequently wrong, says new study

    January 2026

    • Study reveals that fixing AI mistakes takes up to 40% of the time that it saves

    • An AI tool used by ICE to identify applicants with previous law enforcement experience falsely flagged applicants with no such experience, leading to the placement of unqualified recruits in field offices.

    December 2025

    • AI mistakes clarinet for gun at Florida school

    November 2025

    • Google Antigravity deletes entire content of user’s computer drive

    • Report finds AI hallucinations in 490 court filings from the past six months

    October 2025

    • Teenager handcuffed after AI mistakes Doritos packet for gun

    • Lawyer submits AI-assisted court filing with fake citations

    • Man follows ChatGPT advice to stop eating salt, develops rare condition. The man was hospitalized, sectioned, and eventually treated for psychosis. He tried to escape the hospital within 24 hours of being admitted.

    • ChatGPT-5 jailbroken within 24 hours of release

    July 2025

    • AI Coding app deletes entire company database

    • McDonald’s AI chatbot error exposes data of 64 million job applicants

    • AI program is tasked with running a small shop, goes insane, claims to be human

    • Apple Intelligence falsely presents BBC headline

    … and it just keeps going.

  • shellington@piefed.zip · 29 days ago · ↑6

    I agree there is a lot of annoying hype. However, I also agree there are some specific use cases where it can be helpful.

    I, for one, find it handy sometimes when I am writing bash scripts to do things on my system. I obviously check them before running them, but it does save time.

    I do recommend running models locally if possible, though, as it is obviously preferable from a privacy and cost standpoint.

  • FoundFootFootage78@lemmy.ml · edited 28 days ago · ↑6
    • Searching a large dataset with vague search criteria.
    • Real-time feedback when studying a foreign language (since accuracy matters less than quantity).
    • Apparently medicine is using generative AI for something meaningful, but I’m not entirely convinced it is actually generative AI and I’d need to do more research.
    • Sometimes it can help in learning to program and in sanity-checking code security.
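
The first bullet (fuzzy search over a large dataset) usually means embedding similarity rather than keyword matching: embed every document once, embed the vague query, and rank by cosine similarity. A toy sketch with hand-made 3-dimensional vectors standing in for real model embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def fuzzy_search(query_vec, corpus, top_k=2):
    """Return the top_k document names ranked by similarity to the query."""
    ranked = sorted(corpus.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [doc for doc, _ in ranked[:top_k]]

# Toy vectors; a real system would use a text-embedding model instead.
corpus = {
    "invoice_2023.txt": [0.9, 0.1, 0.0],
    "meeting_notes.txt": [0.1, 0.8, 0.2],
    "receipt_scan.txt": [0.8, 0.2, 0.1],
}
print(fuzzy_search([1.0, 0.0, 0.0], corpus))  # → ['invoice_2023.txt', 'receipt_scan.txt']
```

The vagueness tolerance comes from the embedding space itself: near-synonyms land near each other, so the query doesn’t have to share any words with the document.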
    • CanadaPlus@lemmy.sdf.org · 29 days ago · ↑2

      If you’re thinking of protein design it is, just with a sequence instead of natural language text. Although it’s not just a straight LLM, there’s some kind of physics awareness engineered in as well.

  • rossman@lemmy.zip · 29 days ago · ↑6

    Rubber-ducking for those with social anxiety. Also, low-friction access to surface-level answers that used to take digging through multiple sources.

    It’s a study monster that initially wiped out Chegg, Duolingo, SparkNotes, etc. The double edge is that people forgot how to take notes and learn the fundamentals needed to handle complex problems.

  • ☂️-@lemmy.ml · edited 28 days ago · ↑5

    Translation is pretty good.

    They want to make AI NPCs in games, which could be awesome if we can ever reduce the system requirements for running them.

    • sangeteria@lemmy.ml · 28 days ago · ↑3

      There’s that one silly vampire game that uses AI NPCs. I think it looks kind of fun, judging from the people I’ve seen play it.

  • CanadaPlus@lemmy.sdf.org · edited 29 days ago · ↑5

    Anything that’s fuzzy and impossible to automate with traditional algorithms, but that also has a reasonably high tolerance for error. It just makes up stuff a good portion of the time, you see.

    However, I’ve found some benefits to AI. For example, I’m chatting with ChatGPT about credit cards, because it’s something I may want to get into. It’s helping me understand them better than most people who have tried explaining them to me, simply because it gives me a more streamlined answer instead of beating around the bush.

    Watch out, personal finance is not one of those things.

  • damnthefilibuster@lemmy.world · 29 days ago · ↑5

    I was sitting in a restaurant the other day, staring at the menu. It was an Italian place, and none of the dishes made sense to me: too wordy, and it wasn’t clear what was meat and what was fancy cheese. The waiter was utterly useless, too busy to help and, when present, not answering my questions about what would be a good simple pasta in a white sauce.

    I took a photo and asked Claude what would be a good white-sauce pasta, something like an Alfredo.

    It found two options I hadn’t even looked at. AI is good at sorting through complexity. But I don’t just mean AI as in LLMs; it needs a lot more tools and knowledge to be useful. So what you need is a smart system, which may or may not have AI as a component.

    • iguessimlemming@lemmy.ml · 29 days ago · ↑3 ↓1

      Going to Italy to use an LLM to find pasta Alfredo is… well, there’s your use case. Pure and unfettered ignorance. I will take my downvotes now, thank you. I don’t care. Ugh. Just ugh.

  • lattrommi@lemmy.ml · 29 days ago · ↑4

    I went to my local neighborhood association because I wanted to improve where I live. I was elected president of the association a couple months later, mostly because no one else wanted to do it. It’s a fairly poor part of a medium sized city in the U.S.

    I’ve been using AI (running locally on a computer I built, not connected to the internet, to reduce harm to the environment) to apply for grants, plan events, and help me run the meetings.

    It is actually perfect for the job, and I say that as someone who thinks AI is mostly hype and useless for the majority of its current common uses. I feed it the text from city grant applications or ask it to make a poster to increase attendance, and it has saved me a lot of time. Without it, as someone diagnosed with ADHD, I would not have been able to do most of the stuff I have accomplished so far.