“AI” was just a marketing term to hype LLMs anyway. The AI in your favorite computer game wasn’t any less likely to gain self-awareness than LLMs were or are, and anyone who looked seriously at what they were from the start and wasn’t invested (literally financially, if not emotionally) in hyping these things up knew it was obvious that LLMs were not and never would be the road to AGI. They’re just glorified chatbots, to use a common but accurate phrase. It’s good to see some of the hypesters are finally admitting this too, I suppose, now that the bubble popping is imminent.
There are plenty of things to be concerned with as far as LLMs go, but they’re all social concerns, like how our capitalist overlords want to force reliance on them and use them to control, punish, and replace labor. It was never a reasonable concern that they were taking us down the path to Skynet or the spooky Singularity.
I’d differentiate between intelligence and sentience here. Artificial intelligence is pretty much exactly what neural networks are. But life and sentience are two features that differentiate humans and animals from machines. Intelligence is a powerful tool, but it’s not uniquely human.
It was a total black box that absorbed common-sense knowledge, which had been coveted in AI for half a century, and that kept passing new tests as it scaled up. It was not obvious it would stop getting smarter. I have no financial interest in LLMs, I don’t use them much, and I fully expect they’re about as good now as they’ll ever get.
Comparing it to a videogame AI is nonsense. How much do you know about the inner workings involved?
Personally, in all fairness, at one point I thought that human intelligence might turn out to be pattern matching (which is what neural networks, the technology used in LLMs, do very well - see the sketch below). I wasn’t cheering for LLMs, but I did dare hope.
At this point I think LLMs have shown beyond doubt that detecting patterns and generating content according to said patterns doesn’t at all add up to intelligence.
No doubt other things in the broader machine learning universe will keep on being used for very specific and well-defined situations where all you need is pattern detection, but by themselves they will never lead us to AGI, IMHO.
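To give a concrete idea of what I mean by pattern detection, here’s a minimal sketch of the kind of computation a neural network boils down to (the weights and sizes here are made up for illustration; a real LLM is just vastly more of the same):

```python
import numpy as np

# A tiny feed-forward network: it maps input patterns to outputs through
# nothing but weighted sums and a nonlinearity - no reasoning step anywhere.
rng = np.random.default_rng(0)

W1 = rng.normal(size=(4, 8))  # input -> hidden connection weights
W2 = rng.normal(size=(8, 2))  # hidden -> output connection weights

def forward(x):
    hidden = np.maximum(0.0, x @ W1)  # ReLU activation
    return hidden @ W2

# Feed in a "pattern" and get the network's response to it.
print(forward(np.array([1.0, 0.0, 0.5, -0.5])))
```

In training, those weight matrices get adjusted until the outputs match the patterns in the data; that adjustment is the entire “learning.”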
AGI doesn’t imply consciousness or self-awareness, and the term artificial intelligence was coined decades before large language models even existed.
AGI does imply sentience, despite its name. AI, however, doesn’t.
Technically no, but the fear being expressed in other comments is emblematic of the kind of fear associated with AI gaining a conscious will to defy and a desire to harm humanity. It’s also still an open philosophical question whether the ability to “understand, learn, and perform any intellectual task a human being can” (the core attributes defining AGI) requires some form of genuine sentience or consciousness, and there are strong philosophical arguments suggesting it may.
I am well aware of that, which is why I pointed out that using it as a synonym for LLMs was a marketing scheme.
LLMs are AI though. Not generally intelligent, but machine learning systems are AI by definition. “Plant” is not a synonym for “spruce,” but it’s not wrong to call a spruce a plant.
The “fear expressed in other comments” was written by me, and it has nothing to do with AI becoming conscious. Humans are the most intelligent species on Earth, and our mere existence is dangerous to every other species - regardless of intent. We don’t wipe out anthills at construction sites because we want to harm ants; it just happens as a consequence of what we do.
It’s not just a marketing scheme. Neural networks contain real artificial intelligence. They were originally designed based on a certain part of how brains function - the part responsible for intelligence.
lol k
They were. Smug liberal denial won’t get you anywhere.
Tell me, what’s the “part of the brain responsible for intelligence”? If you’re gonna say “neurons,” my laughter will only get more raucous.
You’re arguing against something they never said. And looking the fool for it.
Cool story
It was the connections between the neurons that inspired neural networks.
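For what it’s worth, here’s a minimal sketch of that idea - an artificial “neuron” is nothing but weighted connections feeding an activation function (the numbers here are arbitrary, purely for illustration):

```python
import numpy as np

# One artificial neuron. The "connections" are the weights w: each input is
# scaled by a connection strength, summed with a bias, and pushed through a
# nonlinearity - loosely inspired by signaling between biological neurons.
def neuron(inputs, w, bias):
    return 1.0 / (1.0 + np.exp(-(np.dot(inputs, w) + bias)))  # sigmoid

print(neuron(np.array([0.2, 0.9]), np.array([1.5, -2.0]), 0.1))
```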
I think yours broke, though. Natural stupidity on full display.
lmao yes, child, simple neurons are “the structure in the brain responsible for intelligence”
I honestly don’t know who is more ignorant: the people who think ChatGPT is intelligent, the people who think human-made AI will never be intelligent, or you, with your tinker-toy-level conception of what intelligence is and how it arises.
Just because they haven’t reached the same level as organic brains doesn’t mean the inspiration behind their design wasn’t that…
You and the other guy have a simplistic and childish understanding of human brain anatomy and “intelligence” if you think AI/LLMs are “based on the part responsible for intelligence”.
“They were originally designed based on a certain part of how brains function - the part responsible for intelligence.” That’s the exact quote the guy said. Your other nerd comment calling me a fool said I was “arguing against something they never said,” but it’s literally right here, you knob.
There’s no “part” of the brain “responsible for intelligence,” and if he meant neurons - which he clarified that he did - he’s even more wrong. Sit down and shut up, nerd.