AI is overhyped and unreliable - Goldman Sachs
https://www.404media.co/goldman-sachs-ai-is-overhyped-wildly-expensive-and-unreliable/
“Despite its expensive price tag, the technology is nowhere near where it needs to be in order to be useful for even such basic tasks”
We know, you guys tried using the buzz around it to push down wages. You either got what you wanted and changed your tune, or realized you fell for another tech bro middle-manning unsolicited solutions into already-working systems.
It’s weird to me that people on Lemmy are so anti-ML. If you aren’t impressed, you haven’t used it enough. “Oh, it’s not 100% perfect.” Well yeah, who cares? You should partner it with a human to supervise it anyway. One human can supervise many ML partners.
In terms of practical commercial uses, these highly human-in-the-loop systems are about where it is, and there are practical applications and products built on top of it. What was sold, though, was much more: either a replacement for people or a significant jump in functionality.
For example, there are products that will give you an AI summary of a structured or fairly uniform document, like a generic press release. But there’s no good replacement for something that reads backgrounds on 50 different companies and figures out which one you should invest in, without a human basically redoing all of that work themselves just to check the AI’s output. The latter is what’s being sold to justify the enormous cost of hosting and training AI.
I was fully on board until, like, a year ago. But the more I used it, the more it came undone.
I initially felt like it could really help with programming. And it looked like it, too - when you fed it toy problems where you don’t really care how the solution looks, as long as it’s somewhat OK. But once you start giving it constraints that stem from a real project, it just stops being useful. It ignores constraints (use this library, do not make additional queries, …), and when you point out its mistake and ask it to do better, it goes “oh, sorry! Here, let me do the same thing again, with the same error!”.
If you’re working in a less common language, it even dreams up non-existent syntax.
Even the one thing it should be good at - plain old language - it sucks ass at. It’s become so easy to spot LLM garbage, just due to its style.
Worse, if you ask it to proofread a text for spelling and grammar mistakes, explicitly telling it not to change the wording or style, there’s about a 50/50 chance it will either
- change your wording or style, or
- point out errors that are not even in the original text in the first place!
I could honestly go on and on, but what it boils down to is: it is able to string together words that make it sound like it knows what it is doing, but it is just that, a facade. And it looks like for more and more people, the spell is finally breaking.
@coffee_with_cream@sh.itjust.works @technology@LemmyWorld@mastodon.world I don’t hate it, but I do think it’s overhyped by Wall Street’s usual infinite-growth assumption.
Hey! This god damned steam engine keeps using up all our fucking water and coal. This shit is so inefficient and overhyped
It’s like how steam-powered cars were developed, but by the time they engineered out all the disadvantages, like having to bring the car up to temperature half an hour before driving, the gasoline-powered car was already there, leaving the steam car in the dust.
Not to mention the experiments with steam powered aircraft.
yeah might as well wait for LLLLLMs
Marketing will give it a better name.
Did you know that Boeing first named their passenger jet the 700 series, but marketing found that 707 sounded much better? That’s why we now have the famous 747 and so on.
surprised it didn’t end up in a razor blade race where you fly the Airbus 900000000000009
Remember that time the dot com bubble burst and that was the end of internet commerce? Crazy people thought they could buy and sell goods and services over the internet. Glad we live in saner times now.
I remember saying a year ago when everybody was talking about the AI revolution: The AI revolution already happened. We’ve seen what it can do, and it won’t expand much more.
Most people were shocked by that statement because it seemed like AI was just getting started. But here we are, a year later, and I still think it’s true.
It’ll expand but it will take 5-10 years. Just like Web 1.0 and 2.0.
Not with the current tech. It can get faster, produce more detailed output, maybe consume less too, but there seems to be a ceiling on what LLMs and their derivatives can do. There has been no improvement in that regard, and people who look into it are pretty confident it won’t happen at this point.
I think it all depends on how good our tools to detect AI-generated content become. If it isn’t distinguishable, then the internet is probably about to be flooded by AI-generated content, which in turn means AI is going to be trained more and more on AI content, degrading the models in the process.
AI development is indeed a series of S-curves and we’re currently nearing the peak of the curve. It’s going to be some time before the new S begins.
Those people were talking about the kind of AI we see in sci-fi, not the spellchecker-on-steroids we have today. There used to be a distinction, but marketing has muddied those waters. The sci-fi variety has been rebranded “AGI” so I guess the rest of that talk would go right along with it - the ‘AGI singularity’ and such.
All still theoretically possible, but I imagine climate change will take us out, or we’ll find some clever new way to make ourselves extinct, before real AI (or AGI) becomes a thing.
Given AI’s energy needs, it’s already helping to take us out.
> The AI revolution already happened. We’ve seen what it can do, and it won’t expand much more.
That’s like seeing a basic electronic calculator in the 60s and saying that computing won’t expand much more. Full-AI isn’t here yet, but it’s coming, and it will far exceed everything that we have right now.
> That’s like seeing a basic electronic calculator in the 60s and saying that computing won’t expand much more.
“Who would ever need more than 640K of RAM?” -Bill Gates
Oh, I’m not saying that there won’t one day come a better technology that can do a lot more. What I’m saying is that the present technology will never do much more than it is already doing. This is not an issue of refining the technology for more applications. It’s a matter of developing a completely new type of technology.
In areas like generative text; summarizing articles and books; writing short portions of code to assist humans; creating simple fan art and throwaway images like avatars and the stock photos at the top of articles; perhaps creating short animations; and improving pattern recognition for things like speech and facial recognition - in all of these areas, AI was very rapidly revolutionary.
Generative AI will not become capable of doing things that it’s not already doing. Most of what it’s replacing are just worse computer programs. Some new technology will undoubtedly be revolutionary in the way that computers were a completely new revolution on top of basic function calculators. People are developing quantum computers, and mapping the precise functions of brain cells. If you want, you can download a completely mapped actual nematode brain right now. You can buy brain cells online, even human brain cells, and put them into computers. Maybe they can even run Doom. I have no idea what the next computing revolution will be capable of, but this one has mostly run its course. It has given us some very incredible tools in a very narrow scope, and those tools will continue to improve incrementally, but there will be no additional revolution.
That’s the thing, though: that’s not comparable, and it misses the point entirely. “AI” in this context, and in current conversations about it, specifically means LLMs. They will not improve to the point of general intelligence, because that is not how they work. Hallucinations are inevitable with the current architectures and methods, and they lack an inherent understanding of concepts in general. It’s the same reason they can’t do math or logic problems that aren’t common in the training set. It’s not intelligence. Modern computers are built on the same principles and architectures as those calculators, just iterated upon extensively. No such leap is possible using large language models. They are entirely reliant on a finite pool of data to mimic as effectively as possible; they are not learning or understanding concepts the way “Full-AI” would need to in order to be reliable or able to generate new ideas.
It’s super weird that people think LLMs are so fundamentally different from neural networks, the underlying technology. Neural network architectures are constantly improving, and LLMs are just the product of a ton of research that emerged after the discovery of the transformer architecture. What LLMs have shown us is that we’re definitely on the right track using neural networks to solve a wide range of problems classified as “AI”.
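For anyone curious, the core trick behind that transformer architecture really is a single operation: scaled dot-product attention. Here’s a minimal NumPy sketch of it (toy dimensions, random data, not any real model’s code):

```python
import numpy as np

def attention(Q, K, V):
    # Similarity of each query to each key, scaled to keep the softmax stable.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    # Softmax over keys: each row becomes a set of attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output is a weighted average of the value vectors.
    return weights @ V

# Toy example: a "sequence" of 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(attention(Q, K, V).shape)  # (4, 8): one output vector per token
```

Stack that with some linear layers a few dozen times and you have the skeleton of every modern LLM.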
I think the main problem is applying LLMs outside the domain of “complete this sentence.” It’s fine for what it is, and trained on huge datasets it obviously appears impressive, but it doesn’t know if it’s right or wrong, and the evaluation metrics are different. In most traditional applications of neural networks, you have datasets with right and wrong answers. That’s not how these are trained, as there is no “right” answer to “tell me a joke.” So the training has to be based on what would likely fill in the blank. That could be an actual joke, a bad joke, or a completely different topic; there’s no difference in the training data. The biases, the incorrect answers, all the faults of this massive dataset are inherent in the model, and there’s no fixing that. LLMs are fundamentally different in their application, training, and evaluation methods from other neural networks that are actually effective at what they do, like image processing and identification. The scope of what they’re trying to do with a finite dataset is unrealistic and entirely unconstrained, compared to more “traditional” neural networks, which are very narrow in scope exactly because of this issue.
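To make that training difference concrete, here’s a toy PyTorch sketch (random data and stand-in linear layers, purely illustrative): a classifier is graded against an answer key, while a language model’s only “label” is whatever token happened to come next in the text.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Supervised setup: every example has a ground-truth label,
# so the loss directly measures right vs. wrong.
features = torch.randn(16, 32)             # 16 examples, 32 features each
labels = torch.randint(0, 10, (16,))       # the "answer key"
classifier = torch.nn.Linear(32, 10)       # stand-in for a real classifier
loss_supervised = F.cross_entropy(classifier(features), labels)

# LLM setup: the "label" is just the next token of the training text.
# There is no right answer, only "what tended to come next".
vocab, dim = 100, 32
tokens = torch.randint(0, vocab, (16, 12))  # 16 sequences of 12 tokens
embed = torch.nn.Embedding(vocab, dim)
lm_head = torch.nn.Linear(dim, vocab)       # stand-in for a real LM
logits = lm_head(embed(tokens[:, :-1]))     # predict each next token
loss_next_token = F.cross_entropy(
    logits.reshape(-1, vocab),
    tokens[:, 1:].reshape(-1),              # target = input shifted by one
)
print(loss_supervised.item(), loss_next_token.item())
```

Both use the same loss function, but only the first one is grading against a known correct answer; the second is just imitating the corpus, faults and all.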
> Full-AI isn’t here yet, but it’s coming, and it will far exceed everything that we have right now.
Go back to school; hopefully your next statement won’t sound as dumb.
Sure.
GPT-4 is not that. Neither will GPT-5 be. They are language models that marketing is calling AI. They have a very specific use case, and it’s not something that can replace work or workers requiring any level of traceability or accountability. It’s just “the thing the machine said”.
Marketing latched onto “AI” because blockchain and cloud and algorithmic had gotten stale and media and CEOs went nuts. Samsung is now producing an “AI” vacuum that adjusts suction between hardwood and carpet. That’s not new technology. That’s not even a new way of doing that technology. It’s just jumping on the bandwagon.
> Marketing latched onto “AI” because blockchain and cloud and algorithmic had gotten stale and media and CEOs went nuts.
Notably, this also coincided with the first higher interest rate environment in the broader economy in over a decade.
Finally, the suits are catching up
Came here to say: we read last week that the industry has spent $600bn on GPUs. They need that investment returned, so we’re getting AI whether it’s useful or not… but that’s also written in the article.
NO KIDDING YOU WORLD-DESTROYING MONEYHUNGRY COCKROACHES
This is a start. It will get better.
Heartbreaking: The Worst Person You Know Just Made A Great Point
AI is dangerous…
The singularity is near…
- Define singularity in this context
- Define near in this context
- Define AI in this context
Goldman Sachs is overhyped and unreliable.
Be that as it may, I don’t think they’re incorrect in their statement here.
I do find the similarities between the function of AI and the function of a corporation to be quite interesting…
It’s all the bullshitting going on in both.
Absolutely true, but the morons (willful and not) will take this as additional proof that it’s altogether useless and a net negative.
There’s not a lot I agree with GS on, but this is on the list.
I concur.
Yeah… It’s machine learning with a hype team.
There are some great applications, but they are very narrow
We taught linear algebra to talk real pretty.
Oh, you’re a dirty eigenvector, aren’t you! I’m going to transpose you so hard they won’t know you from a probability matrix!
Wow, I hate Goldman Sachs, but I think they’re on to something here…