The scenario begins with AI agents undergoing a “jump in capability”.
Might as well stop reading there. Another fluff piece about how useful and capable AI supposedly is, disguised as a doomsday scenario. I’m so sick of reading this bullshit. “Agentic AI” based on LLMs does not work reliably yet and very likely never will.
If you complain about bugs in traditional (deterministic) software, you ain’t seen nothing yet. A probabilistic system such as an LLM might or might not book the correct flight for you. It might give you the information you have asked for or it might delete your inbox instead.
As a consequence of a system being probabilistic, anything you do with it works or fails based on probabilities. This really is the dumbest timeline.
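The compounding cost of that probabilistic behavior is easy to see with a toy calculation (the numbers here are hypothetical, just for illustration): even a per-step success rate that sounds respectable collapses once an agent has to chain many steps together.

```python
def chance_of_full_success(per_step: float, steps: int) -> float:
    """Probability that every one of `steps` independent steps succeeds."""
    return per_step ** steps

# 95% reliable per step sounds fine, until the agent needs 20 steps
# to finish the task; then the whole run succeeds barely a third of the time.
print(round(chance_of_full_success(0.95, 20), 3))  # → 0.358
```

This assumes independent steps, which is generous; in practice one early hallucinated fact poisons everything downstream.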
Not to mention that agents aren’t immune to confabulation, or what we’d call it if a human did it: “making shit up”.