What are your thoughts on Generative Machine Learning models? Do you like them? Why? What future do you see for this technology?
What about non-generative uses for these neural networks? Do you know of any field that could use such pattern recognition technology?
I want to get a feel for the general thoughts of Lemmy users on this technology.
It’s bullshit. It’s inauthentic. It can be useful for chewing through data, but even then the output can’t be trusted. The only people I’ve met who are absolutely thrilled by it are my bosses, who are two of the most frustrating, stupid, pig-headed, petty people I’ve ever met. I wish it would go away. I’m quitting my job next week, taking a big paycut and barely being able to pay the bills, specifically because those two people are unbearable. They also insist that I use AI as much as possible.
It’s a tool with some interesting capabilities. It’s very much in a hype phase right now, but legitimate uses are also emerging. Automatically generating subtitles is one good example of that. We also don’t know what the plateau for this tech will be. Right now there are a lot of advancements happening at a rapid pace, and it’s hard to say how far people can push this tech before we start hitting diminishing returns.
For non-generative uses, using neural networks to look for cancer tumors is a great use case https://pmc.ncbi.nlm.nih.gov/articles/PMC9904903/
Another use case is using neural nets to monitor infrastructure the way China is doing with their high speed rail network https://interestingengineering.com/transportation/china-now-using-ai-to-manage-worlds-largest-high-speed-railway-system
DeepSeek R1 appears to be good at analyzing code and suggesting potential optimizations, so it’s possible that these tools could work as profilers https://simonwillison.net/2025/Jan/27/llamacpp-pr/
I do think it’s likely that LLMs will become a part of more complex systems using different techniques in complementary ways. For example, neurosymbolics seems like a very promising approach. It uses deep neural nets to parse and classify noisy input data, and then uses a symbolic logic engine to operate on the classified data internally. This addresses a key limitation of LLMs: their ability to reason reliably and to explain how they arrive at a solution.
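For a concrete sense of that pattern, here’s a tiny toy sketch (every name and rule here is made up for illustration, not taken from any real neurosymbolic library): a stand-in “neural” classifier maps noisy input to symbols, and a transparent rule table then reasons over those symbols, so you can point at exactly which rule produced the answer.

```python
def classify(token: str) -> str:
    """Stand-in for a neural classifier mapping noisy input to a symbol."""
    token = token.strip().lower()
    if token in {"rain", "raining", "rainy"}:
        return "RAIN"
    if token in {"cloud", "clouds", "cloudy"}:
        return "CLOUDS"
    return "UNKNOWN"

# Symbolic layer: explicit, inspectable rules over the classified symbols.
RULES = {
    ("RAIN",): "take_umbrella",
    ("CLOUDS",): "maybe_umbrella",
}

def decide(noisy_inputs):
    symbols = tuple(classify(t) for t in noisy_inputs)
    # Unlike an end-to-end net, this step can explain itself: we can name
    # the exact rule (or absence of one) behind the decision.
    return RULES.get(symbols, "no_rule")

print(decide([" Raining "]))  # -> take_umbrella
```

The neural part handles the messy mapping from raw input to symbols; the symbolic part stays auditable.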
Personally, I generally feel positively about this tech and I think it will have a lot of interesting uses down the road.
I think that they’re neat, their development is fascinating to me, and they have their utility. But I am sick of executive and marketing types sloppily cramming them into every corner of every service just so they can tell their shareholders that it’s “powered by AI”. So far, I’ll use a page or app dedicated to chatting with the LLM, and I’ve also found that GitHub Copilot in VS Code is pretty nifty sometimes for things like quickly generating docs that I can then just proofread and edit. But in most other applications and websites I don’t use them at all, or I’m forced to and the experience is worse. Recently, I’ve been having to work in Microsoft’s Power Platform a bit for a client (help me). Almost every page in the entire platform has an AI chatbot on the side that’s supposed to do some of the work for you. Don’t use it. It fucks up your shit. Ask it to do something, and it will change your flow or whatever you’re working with using the wrong syntax that won’t even compile 9/10 times, with no opportunity to undo, and the remaining 1/10 is logic errors. Ask it questions about the platform, and not only will it not know anything, it will literally accuse you of not speaking English.
TL;DR I think they’re neat and useful IF they’re used responsibly and implemented well. Otherwise they’re a nuisance and an excuse to use a buzzword at best, or dangerous at worst.
The pushback against genAI is mostly reactionary moral panic with (stupid|misinformed|truth-stretching) talking points, such as:
- AI art being inherently “plagiarising”
- AI using as much energy as crypto, and the AI = crypto mindset in general
- AI art “having no soul”, .*
- “People use AI to do «BAD THING», therefore AI IS THE DEVILLLL ‼‼‼”
- .*
Any legitimate criticisms are sadly drowned out by this bollocks; can’t trust anti-AI people to actually criticise the tech. Am bitter.
AI art being inherently “plagiarising”
Yes, it is, simply due to the nature of the “training”/“learning” process, which is learning in name alone. If you know how this mathematical process works, you know the machine’s definition of success is how well its output matches the data it was trained with. The machine is effectively trying to encode its database in its nodes. I would recommend you inform yourself on how the “training” process actually works, down to the mathematical level.
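To make that objective concrete, here’s a minimal toy training loop (plain NumPy and a two-parameter linear model rather than a neural net, purely for illustration): the “definition of success” is literally a number measuring how far the model’s output is from the training data, and each step of gradient descent just shrinks that gap.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1))
y = 3.0 * X[:, 0] + 1.0          # training targets the model tries to reproduce

w, b = 0.0, 0.0                  # model parameters
lr = 0.1
for _ in range(500):
    pred = w * X[:, 0] + b
    err = pred - y               # gap between output and training data
    # Gradient of mean squared error with respect to each parameter:
    w -= lr * (2 * err * X[:, 0]).mean()
    b -= lr * (2 * err).mean()

# w and b converge toward the values that generated the data (3.0 and 1.0)
```

Whether that process amounts to “memorising the database” or to generalising is exactly the point of contention; the loop only shows what the training signal is.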
AI using as much energy as crypto, and the AI = crypto mindset in general
AI is often pushed by the same people who pushed NFTs and whatnot, so this is somewhat understandable. And yes, AI consumes a lot of energy and water. Maybe not as much as crypto, but still, not something we can afford to use for mindless entertainment in our current climate catastrophe.
AI art “having no soul”
Yup. AI “art” works by finding pixel patterns that repeat with a given token. Due to its nature, it can only repeat patterns which it identified in its training data. Now, we have all heard the saying “An image is worth a thousand words”. This saying is quite the understatement. For one to describe an image down to the last detail, in such detail that someone who never saw the image could perfectly replicate it, one would need far more than a thousand words, as evidenced by computer image files, since these are basically what was just described. The training data never has enough detail to describe the whole image that thoroughly, and therefore it is incapable of doing anything too specific.
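A rough back-of-envelope illustration of that gap (all numbers here are assumed averages, purely illustrative):

```python
# Raw information in a modest uncompressed RGB image vs. a thousand words.
width, height, channels = 1024, 1024, 3
image_bytes = width * height * channels          # 3 bytes per pixel

words = 1000
avg_word_len = 5                                  # assumed average, plus a space
text_bytes = words * (avg_word_len + 1)

print(image_bytes // text_bytes)  # -> 524: the image holds ~500x more raw bytes
```

Compression narrows the gap, but the order-of-magnitude point stands: a short caption pins down far less detail than the image itself contains.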
Art is very personal, the more of yourself you put into a piece, the more unique and “soulful” it will be. The more of the work you delegate to the machine, the less of yourself you can put into the piece, and if 100% of the image generation was made by the machine, which is in turn simply calculating an average image that matches the prompt, then nothing of you is in the piece. It is nothing more than the maths that created it.
Simple text descriptions do not give the human meaningful control over the final piece, and that is why pretty much any artist worth their title is not using it.
Also, the irony that we are automating the arts, something which people enjoy doing, instead of the soul degrading jobs nobody wants to do, should not be lost on us.
“Peops use AI to do «BAD THING» , therefour AI ISZ THE DEVILLLL ‼‼‼”
It is true that AI is being used in horrible ways that will take some time to adapt to; it is simply that the negative usages of AI have more visibility than the positive usages. As a matter of fact, this neural network technology was already in use in many fields before the ChatGPT-induced AI hype train.
can’t trust anti-AI people to actually criticise the tech
Correct. It is well known that those who stand to benefit financially from the success of AI are more than willing to lie about its true capabilities.
https://nitter.privacydev.net/ai_curio/status/1564878372185989120
AI consumes a lot of energy and water. Maybe not as much as crypto, but still, not something we can afford to use for mindless entertainment in our current climate catastrophe.
https://tb.opnxng.com/reachartwork/742453285743706112/re-ai-water-usage https://blog.giovanh.com/blog/2024/08/14/is-ai-eating-all-the-energy-part-1-of-2/
Activities like eating beef use more energy than AI models, so they contribute more to climate change (in whatever likely negligible ways compared to corporate entities like Shell, who we should focus on when reversing climate change) than use of AI models does. But there’s no widespread moral panic over individuals’ beef consumption (the closest you’ll get are some kinds of vegans, but they stay fringe) compared to AI use.
Crypto is the exception because it literally wastes energy, with very limited use cases.
People shouldn’t be calling for the death of an entire medium based on something as subjective as its outputs being “soulless”.
Simple text descriptions do not give the human meaningful control over the final piece, and that is why pretty much any artist worth their title is not using it.
What about AI-assisted art, which has more “human meaningful control” than simple (txt2img prompt|inpaint)ing, then?
Also, the irony that we are automating the arts, something which people enjoy doing, instead of the soul degrading jobs nobody wants to do, should not be lost on us.
I was initially one of those people who didn’t think art was automatable. Turns out I was wrong. Also, not every artist enjoys every part of the process. Anyone who does art in any serious capacity knows that; I’m sure some would find spending hours upon hours tweaking (pose|composition|colours|lighting placement|.*) before getting to the “fun parts” “soul-degrading”. While AI art models don’t automate everything, they can automate those parts with varying success.
can’t trust anti-AI people to actually criticise the tech
Correct. It is well known that those who stand to benefit financially from the success of AI are more than willing to lie about its true capabilities.
While I don’t think you’re wrong, that’s not what I said.
When I say you can’t trust anti-AI-art people to criticise AI art tech, I’m including you. If you put AI “art” in scare quotes, you’re part of the problem; most of your criticisms are based on (easily debunked claims|misinformation|subjectivities|.*).
Using AI like DeepSeek is a lot easier than sifting through 50 search results, though if the question is about a relatively new technology, it usually doesn’t work.
No joke, it will probably kill us all… The Doomsday Clock is citing Fascism, Nazis, Pandemics, Global Warming, Nuclear War, and AI as the harbingers of our collective extinction… The only thing I would add, is that AI itself will likely speed-run and coordinate these other world-ending disasters… It’s both Humanity’s greatest invention, and also our assured doom.
“AI” is humanity’s greatest invention…? wtf lol
Capitalism will ruin any good opportunities with said technology, much like every technology that preceded it.
Most GenAI was trained on material they had no right to train on (including plenty of mine). So I’m doing my small part, and serving known AI agents an infinite maze of garbage. They can fuck right off.
Now, if we’re talking about real AI, that isn’t just a server farm of disguised Markov chains in a trenchcoat, neural networks that weren’t trained on stolen data, that’s a whole different story.
I like to think somewhere researchers are working on actual AI and the AI has already decided that it doesn’t want to read bullshit on the internet
No. It is an unneeded waste of resources spent by anti-human perverts.
The actual purpose is to parse surveillance data for the capitalist class.
Death. Kill 'em all. Butlerian jihad now. Anybody trying to give machines even the illusion of thought is a traitor to humanity. I know this might sound hyperbolic; it’s not. I am not joking rn. I mean it.
You sound like spiritualist empires in Stellaris
Whatever that means, it sounds based (I’ve been meaning to play Stellaris for ages but haven’t really gotten around to it since the one game I played back in like 2018 when I bought it)
As a tool for reducing our societal need to do hard labor I think it is incredibly useful. As it is generally used in America I think it is an egregious form of creative theft that threatens to replace a large range of the working class in our nation.
agreed, I’m staying hopeful it’ll improve lives for most when used efficiently, at the cost of others losing jobs, sadly.
on the other hand, wealth inequality will worsen until policies change
I would probably be a bit more excited if it didn’t start coming out during a time of widespread disinformation and anti-intellectualism.
I just come here to share animal facts and similar things, and the amount of reasonably realistic AI images and poorly compiled “fact sheets”, and recently also passable videos of non-real animals, is very disappointing. It waters down basic facts as it blends into more and more things.
Stuff like that is the lowest level of bad in the grand scheme of things. I don’t even like to think of the intentionally malicious ways we’ll see it be used. It’s a going to be the robocaller of the future, but not just spamming our landlines, but everything. I think I could live without it.
I think it’s fine if used in moderation. I use mine for doing the mindless day-to-day stuff like writing cover letters or business-type emails. I don’t use it for anything creative though, just to free myself up to do that stuff.
I also suck at coding so I use it to write little scripts and stuff. Or at least to do the framework and then I finish them off.
Hype bubble. Has potential, but nothing like what is promised.
The trough of disillusionment!
Mixed feelings. I decided not to study graphic design because I saw the writing on the wall, so I’m a little salty. I think they can be really useful for cutting back on menial tasks though. For example, I don’t see why people bitch about someone using AI for their cover letter as long as they proofread it afterwards. That seems like the kind of thing you’d want to automate, unlike art and human interaction.
I think right now I just kind of hate AI because of capitalism. Tech companies are trying to make it sound like they can do so many things they really can’t, and people are falling for it.
Writing a cover letter is a good exercise in self reflection
True, I just assumed that reflection was required in order to give the AI the prompt, and the AI was mainly used to format it correctly. I might be talking out of my ass here since I haven’t used it extensively.