That it’s controlled by a few is only a problem if you use it… my issue with it starts before that.
My biggest gripe with AI is the same problem I have with anything crypto: its out-of-control power consumption relative to the problem it solves or the purpose it serves. And, by extension, the fact that nobody with any real political power is addressing this.
Here we are using recycled bags, banning straws, putting explosive refrigerant in fridges, and using LED lights in everything, all in the name of the environment, while at the same time some datacenter is burning kWh by the bucketload generating pictures of cats in space suits.
Here we are using recycled bags, banning straws, putting explosive refrigerant in fridges, and using LED lights in everything
lol, sucker. none of that does shit and industry was already destroying the planet just fine before ai came along.
Dare I assume you are aware we have “industry” because we consume?
yes. we are cancer. i live on as little as possible but i don’t delude myself into thinking my actions have any effect on the whole.
i spent nearly 20 years not using paper towels until i realized how pointless it was. now i throw my trash out the window. we’re all fucked. if we want to change things, there’s only one tool that will fix it. until people realize that, i really don’t fucking care any more.
now i throw my trash out the window.
You don’t believe not using paper towels was a net positive, so now you choose to create, and by extension live in, a pigsty? I’m not following.
My biggest gripe with AI is the same problem I have with anything crypto: its out-of-control power consumption relative to the problem it solves or the purpose it serves.
Don’t throw all crypto under the bus. Only Bitcoin and other proof-of-work protocols are power-hungry. Second- and third-generation crypto mostly use proof of stake and ZK-rollups for security. Much more energy efficient.
Sure, but despite all the crypto bros’ assurances to the contrary, the only real-world applications for it are buying drugs, paying ransoms, and getting scammed. Which means that any non-zero amount of energy is too much energy.
There are some use cases below, and none use proof of work.
https://hbr.org/2022/01/how-walmart-canada-uses-blockchain-to-solve-supply-chain-challenges
I’m aware of this, but it’s still mostly just something for people to speculate on. Something people buy, sit on, and then hopefully sell at a profit.
Bitcoin was supposed to be a decentralized alternative to money, but the number of people actually, legitimately buying things with crypto is negligible. And honestly, even if it did serve its actual purpose, the cumulative power consumption would still be a point of debate.
And honestly, even if it did serve its actual purpose, the cumulative power consumption would still be a point of debate.
Yeah, but at that point you’d have to consider it against how much power the traditional banking system uses.
Yes, most people buy, sit on, and then hopefully sell at a profit.
However, there are a large number of devs building useful things (supply chain, money transfer, digital identity). Most are as good as, but not yet better than, incumbent solutions.
My main challenge is the energy misconception. The entire Ethereum network runs on the energy equivalent of a single wind turbine.
Here we are using recycled bags, banning straws, putting explosive refrigerant in fridges, and using LED lights in everything, all in the name of the environment, while at the same time some datacenter is burning kWh by the bucketload generating pictures of cats in space suits.
That’s, #1, fashion and not about the environment, and #2, fashion promoted because it’s cheaper for industry.
And yes, power saved somewhere will just be spent elsewhere, more cheaply, because the savings reduce demand for power (or at least slow its growth).
deleted by creator
That’s why we need the weights, right now! Before they figure out how to do this. It will happen, but at least we can prevent backsliding from what we have now.
COO > Return.
I’d say the biggest problem with AI is that it’s being treated as a tool to displace workers, but there is no system in place to make sure that that “value” (I’m not convinced commercial AI has done anything valuable) created by AI is redistributed to the workers that it has displaced.
The system in place is “open weights” models. These AI companies don’t have a huge head start on the publicly available software, and if the value is there for a corporation, most any savvy solo engineer can slap together something similar.
Welcome to every technological advancement ever applied to the workforce
Truer words have never been said.
I don’t really agree that this is the biggest issue, for me the biggest issue is power consumption.
That is a big issue, but excessive power consumption isn’t intrinsic to AI. You can run a reasonably good AI on your home computer.
The AI companies don’t seem concerned about the diminishing returns, though, and will happily spend 1000% more power to gain that last 10% of intelligence. In a competitive market, why wouldn’t they, when power is so cheap?
Large power consumption only happens because someone is willing to dump lots of capital into it so they can own it.
Oh you’re right, let me just tally up all the days where that isn’t the case…
carry the 2…
don’t forget weekends and holidays…
Oh! It’s every single day. It’s just an always and forever problem. Neat.
It’s nothing of the sort. If nobody had the capital to scale it through more power, then the research would be more focused on making it efficient.
The AI business is owned by a tiny group of technobros who have no concern for what they have to do to get the results they want (“fuck the copyright, and especially fuck the natural resources”), who want to be personally seen as the saviours of humanity (despite not being the ones who invented and implemented the actual tech), and who, like all big-wig biz boys, want all the money.
I don’t have a problem with AI tech in principle, but I hate the current business direction and what the AI business encourages people to do and use the tech for.
Well, I’m on board for “fuck intellectual property.” If OpenAI doesn’t publish the weights, then all their datacenters get visited by the Killdozer.
AI has a vibrant open source scene and is definitely not owned by a few people.
A lot of the data to train it is only owned by a few people though. It is record companies and publishing houses winning their lawsuits that will lead to dystopia. It’s a shame to see so many actually cheering them on.
So long as there are big players releasing open weights models, which is true for the foreseeable future, I don’t think this is a big problem. Once those weights are released, they’re free forever, and anyone can fine-tune based on them, or use them to bootstrap new models by distillation or synthetic RL data generation.
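To make the distillation point concrete, here’s a toy sketch of the standard soft-target objective (names and sizes are illustrative; this is the textbook recipe, not any particular lab’s):

```python
import torch
import torch.nn.functional as F

# Textbook knowledge-distillation loss: train the student to match the
# teacher's softened output distribution. All names/sizes are illustrative.
def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    s = F.log_softmax(student_logits / temperature, dim=-1)
    t = F.softmax(teacher_logits / temperature, dim=-1)
    # T^2 keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(s, t, reduction="batchmean") * temperature**2

# Example: 4 token positions over an 8-token vocabulary.
student = torch.randn(4, 8, requires_grad=True)
teacher = torch.randn(4, 8)  # in practice: outputs of a frozen open-weights model
print(distillation_loss(student, teacher))
```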
Technological development and the future of our civilization is in control of a handful of idiots.
The biggest problem with AI is that it’s the brute-force solution to complex problems.
Instead of trying to figure out the most power-efficient algorithm for artificial analysis, they just threw more data and power at it.
Besides the fact that it’s often wrong, by definition it won’t ever be as accurate or efficient as actual thinking.
It’s the solution you come up with the last day before the project is due cause you know it will technically pass and you’ll get a C.
It’s moronic. Currently, decision makers don’t really understand what to do with AI and how it will realistically evolve in the coming 10-20 years. So it’s getting pushed even into environments with 0-error policies, leading to horrible results and any time savings are completely annihilated by the ensuing error corrections and general troubleshooting. But maybe the latter will just gradually be dropped and customers will be told to just “deal with it,” in the true spirit of enshittification.
Either the article editing was horrible, or Eno is wildly uninformed about the world. Creating AIs is NOT the same as social media. You can’t blame a hammer for some evil person using it to hit someone in the head, and there is more to ‘hammers’ than just assaulting people.
Eno does strike me as the kind of person who could use AI effectively as a tool for making music. I don’t think he’s team “just generate music with a single prompt and dump it onto YouTube” (AI has ruined study lo-fi channels); the stuff at the end about distortion is what he’s interested in experimenting with.
There is a possibility for something interesting and cool there (I think about how Chuck Person’s Eccojams is just short loops of random songs repeated in different ways, yet it’s an absolutely revolutionary album), even if in effect all that’s going to happen is music execs thinking they can replace songwriters and musicians with “hey Siri, generate a pop song with a catchy chorus” while talentless hacks inundate YouTube and Bandcamp with shit.
Yeah, Eno has actually made a variety of albums and art installations using simple generative AI for musical decisions, although I don’t think he does any advanced programming himself. That’s why it’s really odd to see comments in an article implying he is uninformed about AI… he was pioneering generative music 20-30 years ago.
I’ve come to realize that there is a huge amount of misinformation about AI these days, and the issue is compounded by there being lots of clumsy, bad early AI works in various art fields, web journalism etc. I’m trying to cut back on discussing AI for these reasons, although as an AI enthusiast, it’s hard to keep quiet about it sometimes.
Eno is more a traditional algorist than an “AI” artist (by which people generally mean neural networks)
I could see him using neural networks to generate, and intentionally pick and loop, short bits with weird anomalies or glitchy sounds. That’s the route I’d like AI in music to go, so maybe that’s what I’m reading in, but it fits Eno’s vibe and philosophy.
AI as a tool not to replace other forms of music, but to do things like training it on contrasting genres or self-made bits, or otherwise creatively breaking and reconstructing the artwork.
John Cage was all about ‘stochastic’ music - composing based on what he divined from the I Ching. There are people who have been kicking around ideas like this for longer than the AI bubble has been around - the big problem will be digging out the good stuff when the people typing “generate a three hour vapor wave playlist” can upload ten videos a day…
Sure. I worked in the game industry and sometimes AI can mean ‘pick a random number if X occurs’ or something equally simple, so I’m just used to the term used a few different ways.
Totally fair
Idk if it’s the biggest problem, but it’s probably top three.
Other problems could include:
- Power usage
- Adding noise to our communication channels
- AGI fears if you buy that (I don’t personally)
Dead Internet theory has never been a bigger threat. I believe that’s the number one danger - endless quantities of advertising and spam shoved down our throats from every possible direction.
We’re pretty close to it; most videos on YouTube, and most websites, exist purely so some advertiser can pay that person for a review or recommendation.
Power usage
I’m generally a huge eco guy, but on power usage in particular I view this largely as a government failure. We have had access to incredible energy resources that the government has chosen not to implement or has effectively dismantled.
It reminds me a lot of how recycling has been pushed so hard onto the general public instead of government laws on plastic usage and waste disposal.
It’s always easier to wave your hands and blame “society” than it is to hold the actual wealthy and powerful accountable.
Could also put up:
- Masses of people are exploited in order to train various AI systems.
- Machine-learning apps that create text or images from prompts are supposed to be supplementary, but businesses are actively trying to replace their workers with this software.
- Machine-learning image generation currently shows diminishing returns from training as we pump exponentially more content into it.
- Machine-learning-generated text and images self-poison their generators’ sample pools, greatly diminishing these systems’ ability to learn from real-world content.
There’s actually a much longer list if we expand to other AI systems, like the robots we’re currently training for automated warfare. There’s also the angle of these image and text generation systems being used for political manipulation and scams. There are a lot of terrible problems created by this tech.
Power usage probably won’t be a major issue; the main take-home message of the DeepSeek brouhaha is that training and inference can be done much more efficiently than we had thought (our estimates had been based on well-funded Western companies that didn’t have to bother with optimization).
AI spam is an annoyance, but it’s not really AI-specific but the continuation of a trend; the Internet was already drowning in human-created slop before LLMs came along. At some point, we will probably all have to rely on AI tools to filter it out. This isn’t something that can be unwound, any more than you can undo computers being able to play chess well.
No?
Anyone can run an AI, even on the weakest hardware; there are plenty of small open models for this.
Training an AI requires very strong hardware; however, this is not an impossible hurdle, as the models on Hugging Face show.
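As a rough illustration of how low the bar for local inference is, a minimal sketch using the Hugging Face transformers library (the model name is just an example; any small open model works):

```python
# Minimal local-inference sketch; a ~0.5B-parameter model runs on CPU
# in a few GB of RAM. The model name is an example, not a recommendation.
from transformers import pipeline

generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

out = generator("Explain proof of stake in one sentence.", max_new_tokens=64)
print(out[0]["generated_text"])
```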
deleted by creator
But the people with the money for the hardware are the ones training it to put more money in their pockets. That’s mostly what it’s being trained to do: make rich people richer.
We shouldn’t do anything ever because poors
This completely ignores all the endless (open) academic work going on in the AI space. Loads of universities have AI data centers now and are doing great research that is being published out in the open for anyone to use and duplicate.
I’ve downloaded several academic models and all commercial models and AI tools are based on all that public research.
I run AI models locally on my PC and you can too.
That is entirely true and one of my favorite things about it. I just wish there was a way to nurture more of that and less of the, “Hi, I’m Alvin and my job is to make your Fortune-500 company even more profitable…the key is to pay people less!” type of AI.
But you can make this argument for anything that is used to make rich people richer. Even something as basic as pen and paper is used everyday to make rich people richer.
Why attack the technology if its the rich people you are against and not the technology itself.
It’s not even the people; it’s their actions. If we could figure out how to regulate its use so its profit-generation capacity doesn’t build on itself exponentially at the expense of the fair treatment of others and instead actively proliferate the models that help people, I’m all for it, for the record.
Yah, I’m an AI researcher, and with the weights released for DeepSeek, anybody can run an enterprise-level AI assistant. Running the full model natively does require about $100k in GPUs, but with that hardware it could easily be fine-tuned with something like LoRA for almost any application. That model can then be distilled and quantized to run on gaming GPUs.
It’s really not that big of a barrier. Yes, $100k in hardware is a lot, but from a non-profit’s perspective that is peanuts.
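For anyone curious, a minimal sketch of what that LoRA setup might look like, assuming the Hugging Face transformers and peft libraries (the model name and hyperparameters are placeholders, not a tested recipe):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Placeholder model; loading the full thing assumes the $100k rig mentioned
# above. The same pattern applies to any open-weights LLM.
base = AutoModelForCausalLM.from_pretrained("deepseek-ai/DeepSeek-V3")

config = LoraConfig(
    r=16,                                 # low-rank adapter dimension
    lora_alpha=32,                        # scaling for the adapter updates
    target_modules=["q_proj", "v_proj"],  # attach to attention projections
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of the base weights
# Train as usual; only the small adapter matrices are updated, which is why
# fine-tuning is so much cheaper than pretraining.
```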
Also, adding a vision encoder for images to DeepSeek would not, in theory, be that difficult, for the same reason. In fact, I’m working on research right now that finds GPT-4o and o1 have similar vision capabilities, implying they share the same first-layer vision encoder, with the textual chain-of-thought tokens read by subsequent layers. (This is a very recent insight, as of last week, by my team, so if anyone can disprove it, I would be very interested to know!)
Would you say your research is evidence that DeepSeek’s model was built using data/algorithms taken from OpenAI via industrial espionage (as Sam Altman is purporting without evidence)? Or is it just likely that they came upon the same logical solution?
Not that it matters, of course! Just curious.
Well, OpenAI has clearly scraped everything that is scrapeable on the internet, copyrights be damned. I haven’t actually used DeepSeek enough to make a strong analysis, but I suspect Sam is just mad they got beaten at their own game.
The real innovation that isn’t commonly talked about is the invention of Multi-head Latent Attention (MLA), which is what drives the dramatic performance increases in both memory (59x) and computation (6x) efficiency. It’s an absolute game changer, and I’m surprised OpenAI hasn’t released their own MLA model yet.
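For intuition, a toy sketch of the low-rank KV compression at the heart of MLA (this ignores RoPE decoupling and per-head bookkeeping, so treat it as the memory-saving idea, not DeepSeek’s actual implementation):

```python
import torch
import torch.nn as nn

# Instead of caching full K and V (2 * d_model floats per token), MLA caches
# a small latent c and re-expands K/V on the fly. Sizes are illustrative.
d_model, d_latent = 4096, 512

down_kv = nn.Linear(d_model, d_latent, bias=False)  # compress hidden state
up_k = nn.Linear(d_latent, d_model, bias=False)     # reconstruct keys
up_v = nn.Linear(d_latent, d_model, bias=False)     # reconstruct values

h = torch.randn(1, 128, d_model)  # (batch, seq_len, d_model) hidden states
c = down_kv(h)                    # only this latent goes into the KV cache
k, v = up_k(c), up_v(c)           # expanded when attention is computed

print(2 * d_model / d_latent)     # cache is 16x smaller in this toy config
```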
While on the subject of stealing data: I have long been of the strong opinion that there is no such thing as copyright when it comes to training data. Humans learn by example, and all works are derivative of those that came before, at least to some degree. Thus, if humans can’t be accused of using copyrighted text to learn how to write, then AI shouldn’t be either. Just my hot take that I know is controversial outside of academic circles.
It’s possible to run the big DeepSeek model locally for around $15k, not $100k. People have done it with 2x M4 Ultras, or the equivalent.
Though I don’t think it’s a good use of money personally, because the requirements are dropping all the time. We’re starting to see some very promising small models that use a fraction of those resources.
wrong. it’s that it’s not intelligent. if it’s not intelligent, nothing it says is of value. and it has no thoughts, feelings or intent. therefore it can’t be artistic. nothing it “makes” is of value either.
The biggest problem with AI is the damage it’s doing to human culture.
Not solving any of the stated goals at the same time.
It’s a diversion. Its purpose is to divert resources and attention from any real progress in computing.