- cross-posted to:
- technology@lemmy.ml
Aug. 26, 2025, 7:40 AM EDT
By Angela Yang, Laura Jarrett and Fallon Gallagher
[this is a truly scary incident, which shows the incredible dangers of AI without guardrails.]
The difference between a cure and a poison is the dose. LLMs are no different. If your gut reaction is to go to an LLM first with a critical thinking challenge, you’ve already lost. Semantic mirror is a great description. It’s similar to writing down information you already know as notes: you’re giving your brain a new way to review and interpret it. If you weren’t capable of solving the problem traditionally, even given more time, I have to imagine it’s unlikely the LLM will bridge that gap.
Some shit is just straight up poison though.
It’s also become one of the few ways left to access knowledge online.
Not TRUSTWORTHY knowledge, but more like: here’s what a thing may be called, and a very shaky baseline you can then validate with actual research, now that you know what to search for.