Nah, it’s not intelligent.
Nothing we have today would be considered AI in science fiction.
Yeah, intelligence is a continuum. Animals have varying degrees of intelligence (esp. corvids, cetaceans, cephalopods, other “c” animals…), but that isn’t the same as saying they have human-level intelligence. AGI and ASI are the important thresholds.
Human intelligence is a spectrum. I would say that current LLMs are at about the 20th percentile on that spectrum.
That says more about my opinions on human intelligence than about LLMs…
Or about my definition. But hey, what do IT experts know, right?
Yeah this is where I’m at. Actual movie level AI would be neat, but what we have right now is closer to a McDonald’s toy pretending to be AI than the real deal.
I’d be overjoyed if we had decently functional AI that could be trusted to do the kind of jobs humans don’t want to do, but instead we have hyped up autocomplete that’s too stupid to reliably trust to run anything (see the shitshow of openclaw when they do).
There are places where machine learning has and will continue to push real progress but this whole “AI is on the road to AGI and then we’ll never work again” bullshit is so destructive.
What we have now is “neat.” It’s freaking amazing it can do what it does. However it is not the AI from science fiction.
I think this is what causes the divide between the AI lovers and haters. What we have now is genuinely impressive even if largely nonfunctional. It’s a confusing juxtaposition.
Folks don’t seem to realize what LLMs are, if they did then they wouldn’t be wasting trillions trying to stuff them in everything.
Like, yes, it is a minor technological miracle that we can build these massively-multidimensional maps of human language use and use them to chart human-like vectors through language space that remain coherent for tens of thousands of tokens, but there’s no way you can chain these stochastic parrots together to get around the fact that a computer cannot be held responsible, algorithms have no agency no matter how much you call them “agents”, and the people who let chatbots make decisions must ultimately be culpable for them.
It’s not “AI”, it’s an n-th dimensional globe and the ruler we use to draw lines on that globe. Like all globes, it is at best a useful fiction representing a limited perspective on a much wider world.
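To make the “charting vectors through language space” point concrete, here’s a deliberately tiny sketch: a bigram model that just counts which word follows which in a toy corpus and samples from those counts. It’s a stochastic parrot in miniature — real LLMs use transformers over subword tokens, not bigram counts, and the corpus here is made up — but the spirit is the same: statistics over past text, no understanding, no agency.

```python
import random

# Hypothetical toy corpus; "language space" here is nothing more than
# conditional next-word frequencies learned from it.
corpus = "the cat sat on the mat and the cat ran".split()

# Count bigram transitions: word -> list of words observed to follow it.
bigrams = {}
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams.setdefault(prev, []).append(nxt)

def generate(start, n, seed=0):
    """Chart a 'path' through language space by repeatedly sampling
    a plausible next word given only the previous one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        choices = bigrams.get(out[-1])
        if not choices:  # dead end: no observed continuation
            break
        out.append(rng.choice(choices))
    return " ".join(out)

print(generate("the", 5))
```

Everything the sampler emits is recombined training data; scaling the map up by trillions of parameters makes the paths eerily coherent, but it doesn’t conjure a decision-maker you can hold responsible.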
Yeah. Like, that’s objectively a very interesting technological innovation. The issue is just how much it’s been overhyped.
The hype around AI would be warranted if it were, like, at the same level as the hype around the Rust programming language or something. Which is to say: it’s a useful innovation in certain limited domains, one worth studying and probably really fascinating to some nerds. If we could have left the hype at that level then we would have been fine.
But then a bunch of CEOs and tech influencers started telling us that these things are going to cure cancer or aging and replace all white collar jobs by next year. Like okay buddy. Be realistic. This overhype turned something that was genuinely cool into this magical fantasy technology that doesn’t exist.
Yeah, the hype is really leaning on that singularitarian angle and the investor class is massively overextended.
I’m glad that the general public is finally moving on down the hype cycle; this peak of inflated expectations has lasted way too long, and it should have been obvious three years ago.
Like, I get that I’m supposedly brighter and better educated than most folks, but I really don’t feel like you need college level coursework in futures studies to be able to avoid obvious scams like cryptocurrency and “AI”.
I feel like it has to be deliberate, a product of marketing effects, because some of the most interesting new technologies have languished in obscurity for years because their potential is disintermediative and wouldn’t offer a path to further expanding the corporate dominion over computing.
This is so well said.
I’m stealing this.
I’m going to use it to explain why I simultaneously have so much derision for modern AI while also enjoying it.
I like McDonald’s toys. I just don’t use them for big person work.
Absolutely. Today’s “AI” is as close to real AI as the shitty “hoverboard” we got a few years back is to the one from BttF. It’s marketing bullshit. But that’s not what bothers me.
What bothers me is that if we ever do develop machine persons, I have every reason to believe they will be treated as disposable property, abused, and misused, and all before they reach the public. If we’re destroyed by a machine uprising, I have no doubt we will have earned it many times over.
See: Battlestar Galactica
By your command
Also see: Quarians vs Geth from Mass Effect
Worse: no uprising happens, and a few hundred humans just scale up every enterprise with proprietary AI, disposing of anyone who stands in the way.
… That’s what they say in sci-fi movies.