“There’s no way to get there without a breakthrough,” OpenAI CEO Sam Altman said, arguing that AI will soon need even more energy.
So AI can’t exist without stealing people’s content and it can’t exist without using too much energy. Why does it exist then?
Because the shareholders need more growth. They might create Ultron along the way, but think of the profits, man!
There’s no way these chatbots are capable of evolving into Ultron. That’s like saying a toaster is capable of nuclear fusion.
That’s if you set the toaster to anything above 3
It’s the further research being done on top of the breakthrough tech behind these chatbots that people are worried about. It’s basically big tech’s mission now to build Ultron, and they aren’t slowing down.
What research? These bots aren’t that complicated beyond an optimisation algorithm. Regardless of the tasks you give it, it can’t evolve beyond what it is.
I think we’ve got a bit before we have to worry about another major jump in AI, and way longer before an Ultron. The ones we have now are effectively parsers for Google or other existing data. I personally still don’t see how we feel like we can get away with calling that AI.
Any AI that actually creates something ‘new’ that I’ve seen still requires a tremendous amount of oversight, tweaking and guidance to produce useful results. To me, they still feel like very fancy search engines.
The models are getting more efficient and smaller very fast if you look just a year back. I bet we’ll run some small LLMs locally on our phones (I don’t really believe in the other form factors yet) sooner than we believe. I’d say before 2030.
I can already locally host a pretty decent AI chatbot on my old M1 MacBook (Llama 2 7B), which writes at the same speed I can read. It’s probably already possible on top-of-the-line phones.
Lol, “old M1 laptop” — 3 to 4 years is not old, damn!
(I’m running a MacBookPro5,3 (mid 2009) on Arch, lol)
But nice to hear that the M1 (and thus theoretically even the iPad, if you’re not talking about the M1 Pro / M1 Max) can already run Llama 2 7B.
Have you tried Mistral AI’s model yet? It should be a bit more powerful and a bit more efficient, iirc. And it’s Apache 2.0 licensed.
https://mistral.ai/news/announcing-mistral-7b/
Huh, nice. I got the MacBook Air secondhand, so I thought it was older. Thanks for the suggestion, I’ll try Mistral next, perhaps on my phone as a test.
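For anyone who wants to try the same thing, a rough sketch using llama.cpp (the binary name, build step, and quantized model filename are assumptions — check the repo’s README for current instructions):

```shell
# Sketch: building llama.cpp and running a quantized Mistral 7B model locally.
# The model filename below is illustrative; grab a GGUF file (e.g. from
# Hugging Face) and put it in the models/ directory first.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make    # on Apple Silicon this builds with Metal acceleration by default

# Run an interactive prompt against the quantized model
./main -m models/mistral-7b-instruct.Q4_K_M.gguf \
       -p "Why do quantized 7B models run well on laptops?" -n 128
```

The reason a 7B model fits on a laptop or phone at all is 4-bit quantization: it shrinks the weights from ~14 GB (fp16) to roughly 4–5 GB of RAM.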
Because it’s a miracle technology. Both of those things are also engineering problems — ones that have already been massively mitigated. You can run models almost as good as GPT-3.5 on a phone, and individuals are pushing the limits of how efficiently we can train every week.
It’s not just making a chatbot or a new tool for art — it’s also protein folding, coming up with unexpected materials, and being another pair of eyes that will help a person do anything.
They literally promise the fountain of youth, autonomous robots, better materials, better batteries, better everything. It’s a path for our species to break our limits, and become more.
The downside is we don’t know how to handle it. We’re making a mess of it, but it’s not like we could stop… The AI alignment problem is dwarfed by the corporation alignment problem
🙄 iTS nOt stEAliNg, iTS coPYiNg
By your definition, everything is stealing content. Nearly everything in human history is derivative of others’ work.