The main point is in the title of the post; the body is an addendum and clarification to the question.
An example article: Google’s AI Sent an Armed Man to Steal a Robot Body for It to Inhabit, Then Encouraged Him to Kill Himself, Lawsuit Alleges – https://futurism.com/artificial-intelligence/google-ai-robot-body-suicide-lawsuit
My thoughts, not quite related to the question:
Well, how are you going to get through what could be your last year, if AI could get out of hand in 2027?
What is happening in the world reminds me of a story: "I Have No Mouth, and I Must Scream." Have you read it?


So, my own take, which is not necessarily shared by everyone, is that current AI systems, the LLM things like ChatGPT or Claude or whatever, are going to have a pretty hard time running amok to any huge degree, due to technical limitations. One big one: they have a lot of static memory (the frozen model weights), but their mutable memory is not very large, just what fits in the context window.
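To make that point concrete, here is a minimal sketch (not any real API; the token budget and the word-counting "tokenizer" are made-up simplifications) of why the context window acts as a small, leaky mutable memory: once a conversation exceeds the budget, the oldest turns simply fall out.

```python
# Illustrative sketch: at inference time an LLM's weights are fixed,
# so the only "mutable memory" is the context window. When the
# conversation exceeds the window, the oldest messages are dropped.

CONTEXT_WINDOW = 8  # hypothetical budget, in "tokens"

def count_tokens(message: str) -> int:
    """Crude stand-in for a real tokenizer: one word = one token."""
    return len(message.split())

def fit_to_window(history: list[str], budget: int = CONTEXT_WINDOW) -> list[str]:
    """Keep only the most recent messages that fit within the token budget."""
    kept, used = [], 0
    for message in reversed(history):  # walk newest-first
        cost = count_tokens(message)
        if used + cost > budget:
            break  # everything older falls out of "memory"
        kept.append(message)
        used += cost
    return list(reversed(kept))

history = [
    "my name is Alice",     # 4 tokens: oldest, first to be forgotten
    "what is the weather",  # 4 tokens
    "tell me a joke",       # 4 tokens
]
print(fit_to_window(history))
# → ['what is the weather', 'tell me a joke']
```

The model never "forgot on purpose"; the earliest message just no longer fits in the budget, which is roughly why long-running agentic behavior is hard without some external memory mechanism bolted on.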
And to some extent, the specific way in which hallucinations show up is an artifact of the fact that they are LLMs. My expectation is that an artificial general intelligence that can reason like a human likely will not be simply an LLM (though it might incorporate an LLM).
However, I think you can say that at some point we will have artificial general intelligences that work at a human level. And then, yeah: whatever reasoning process they use, they will probably make errors, just as humans do. And that could be a problem, just as it is when humans err. In the case of an advanced AI that is much more capable than humans, controlling it and making it do the things we would want is a problem, and not an easy one. Maybe a problem we can't actually solve.
No, but I did play the adventure game based on it in ScummVM.
Yep. Skynet won’t be an LLM.
It’ll be a hybrid Mamba model.
Also, everyone should read I Have No Mouth, and I Must Scream. Harlan Ellison’s depiction of an evil AI is … prescient.