Aug. 26, 2025, 7:40 AM EDT
By Angela Yang, Laura Jarrett and Fallon Gallagher

[this is a truly scary incident, which shows the incredible dangers of AI without guardrails.]

    • otacon239@lemmy.world · 2 months ago

      The difference between a cure and a poison is the dose. LLMs are no different. If your gut reaction is to take a critical-thinking challenge to an LLM first, you’ve already lost. Semantic mirror is a great description. It’s similar to writing down information you already know as notes: you’re giving your brain a new way to review and interpret the information. If you weren’t capable of solving the problem traditionally, even given more time, I’d have to imagine it’s unlikely the LLM will bridge that gap.

    • krunklom@lemmy.zip · 2 months ago

      It’s also become one of the few ways left to access knowledge online.

      Not TRUSTWORTHY knowledge, but more like: here is what a thing may be called, and a very shaky baseline you can then validate with actual research, now that you know what the thing you’re looking for may actually be called.

  • shalafi@lemmy.world · 2 months ago

    I can’t get ChatGPT to even touch on anything political or sexual. But this works? Fuck me.