I don’t hate AI itself, but the amount of AI slop ruining the internet sours me on it whenever I see it. I used to enjoy fucking around with the early pre-2021 GANs, diffusion models and the GPT-3 playground before ChatGPT was around, and actually liked the crazy dreamlike nonsense they made, but now it all feels like dead soulless crap getting used to replace humans. Probably going to get super downvoted for admitting to ever liking AI image gens lmao
Actual AI for scientific research I’m OK with.
But AI shit crammed into literally everything. No sir, I hate it sir.
I don’t work in IT, so I may have misconceptions and I’m open to being corrected. What I don’t understand is general AI/LLM usage. One place I frequent, one guy literally answers every post with “Gemini says…”. People just don’t seem to bother thinking any more. AI/LLMs don’t seem to offer any advantage over traditional search & you constantly have to fact-check them. Garbage in, garbage out. Soon they’ll start learning from their own hallucinations & we won’t be able to tell right from wrong.
I have succumbed a couple of times. One time it actually helped (I’m not a coder, and it was a coding-related question which it did help with). With a self-host/Linux permissions question it fucked up so badly I actually lost access to an external drive. I’m no expert with Linux, but I’m learning & managed to resolve it myself.
AI answers have been blocked from DDG on all my devices.
I was initially impressed when ChatGPT and Midjourney came out and I was playing around with them. But the novelty quickly wore off, and the more I learned about the flaws in how they operate and their negative environmental effects, the more I came to dislike them. Now, I actively hate and avoid AI.
I’ve been a luddite since long before AI. AI is black box engineering. It’s a shield they can and do use to create a malicious product.
As you pointed out, the most obvious use case is reducing cost of labor regardless of whether total labor is reduced.
I liked it when all it did was help you remember what options to use with tar to unzip a tarball. This feels like how Bitcoin was neat for a time, before 99% of the space became pyramid schemes.
…And if “AI” is on the same trajectory as crypto, that’s not great, heh…
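For the record, the tar incantation above is tar -xzf archive.tar.gz. As a self-contained sketch, here’s the same create-then-extract round trip done with Python’s tarfile module (the file names and scratch directory are just examples I made up):

```python
import os
import tarfile
import tempfile

# Work in a scratch directory so the example is self-contained.
workdir = tempfile.mkdtemp()
with open(os.path.join(workdir, "hello.txt"), "w") as f:
    f.write("hi from the tarball\n")

# Create example.tar.gz (shell equivalent: tar -czf example.tar.gz hello.txt)
archive = os.path.join(workdir, "example.tar.gz")
with tarfile.open(archive, "w:gz") as tf:
    tf.add(os.path.join(workdir, "hello.txt"), arcname="hello.txt")

# Unzip it again (shell equivalent: tar -xzf example.tar.gz -C out/)
out = os.path.join(workdir, "out")
with tarfile.open(archive, "r:gz") as tf:
    tf.extractall(path=out)
```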
Image generation is fun, and LLMs can be a great way to find a starting point for learning something already known to humanity as a whole, but not to you in particular. Because they are statistical association machines, they are practically perfect for answering the ‘what word am I looking for?’ question when you can only ‘talk around’ the concept.
However, that’s not what they are being used for, and the user cost does not match the externalized cost. If users had to pay the real cost today, the AI companies would die tomorrow. (This is probably true of a great many companies but we’re talking AI ones here.)
One of the concepts I keep returning to is ‘X was cool, but then the idiots got it.’ Early internet? Absolute nerdity; the only people on there were highly educated, usually intelligent as well, and the new people came at a pace the community could absorb. Then the idiots came, including business majors, kids, and eventually just everyone. Early mass media? Libraries of printed books. It was still expensive, so no one bothered making and distributing 3,000,000 copies of Ted from the pub’s musings on redheads, but as it became cheaper, and eventually even cheaper in electronic form, gates were no longer kept, and the idiots got in.
In this same way, AI in the form of statistical analysis tools has always been fascinating, and kind of cool. AI assisted radiology is great. Data analysis tools are great. But the idiots have the controls now, and they’re using them to put shrimp Jesus on their deep fake pizza, at the top of GPT-generated ‘articles,’ and we’re all paying the price for their fun in the form of uncountable subsidies, environmental damage, and societal damage.
I dedicated my science fair project to machine learning. I think it was one of my special interests, since I learned linear algebra and calculus, plus some statistics, to understand how to build one.
I tried making an AI that would learn how to play PuyoPuyo, but I used a single DQL neural network, so it was pretty bad 😅. It just spent its time putting the pieces on the side instead of actually getting more chains.
I even remember telling my classmates about ML during an unrelated presentation, telling them that they should get ready for a new era of advanced AI (I cringe every time I remember this. Hopefully they forgot lol)
But yeah, ChatGPT (or its predecessor; I remember it being called something else) was pretty fun. I remember annoying my best friends by asking it to generate a Donald Trump speech on their favorite characters. They were very annoyed, but nowadays they use AI like it’s their second brain haha.
Needless to say, I got to experience firsthand how capitalism ruins certain aspects of innovation. Don’t get me wrong, good progress has been made, but I feel like slapping LLMs (or transformers in general) on every AI problem is not the way to go, even if it pleases investors. There are many types of machine learning algorithms out there, for different types of problems. I feel like LLMs are a part of the puzzle, but not enough for AGI, and putting all the resources into making LLMs do what they’re not best at is a waste.
Also, on the environmental side… all I’ll say is that companies knew the devastating effects a large-scale ML AI would have on the environment. I even remember feeling bad training my AI for the science fair, because I was essentially leaving my computer on for hours running at max power. If a 14-year-old knew this was bad for energy consumption and the environment, you bet Google, Meta and the rest knew as well.
As a bonus section, for those who say they would never have been able to do X or Y without AI, hear me out for a second.
I’m not gonna go on a tirade about how you should’ve been able to, or that you’re a bad person for asking AI for help. I’m just trying to offer a new perspective, as someone who used to use it compulsively (OCD be damned).
I’m not sure if it’s the case for everyone, but at least for me, using AI was mostly an insecurity thing. Things I’d usually be more comfortable looking up on the internet, in documentation, or asking people about, I’d just ask AI. I just thought “I’m not that good at reading docs/looking stuff up”, or “People will just get annoyed and bothered if I ask too many questions”. AI never gets annoyed and “listens”. Plus it’s a relatively good search engine replacement for the median person.
The only reason I’m bringing this up is because I’ve noticed similar behaviour from the kids I taught coding. They would ask ChatGPT to generate code for a cool idea they had, because they didn’t feel like anything they could do themselves would be good enough. It’s like they felt that the stuff they could do was too lame, so might as well generate the code (the hours of debugging time this caused 😭). To contrast with that, back when I started Python, I was stoked to make a base-10 decomposition program that only went up to 10^3. It was fun to figure out, and I wanted them to have fun seeing the ideas and implementations they thought up actually working. I also hear a lot of this almost self-defeatist attitude when people talk about using AI (and I’m not talking about in the workplace. That’s another can of worms).
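That decomposition program is exactly the kind of thing that’s more fun to figure out yourself than to generate. As a rough sketch of what such a beginner project might look like (the function name and output format are my own guesses, not the original program):

```python
# A guess at a beginner's base-10 decomposition program,
# handling powers of ten up to 10^3 as described above.
def decompose(n):
    parts = []
    for power in (1000, 100, 10, 1):
        digit = n // power      # how many of this power fit
        parts.append(f"{digit} x {power}")
        n %= power              # keep the remainder for the next power
    return " + ".join(parts)

print(decompose(1234))  # 1 x 1000 + 2 x 100 + 3 x 10 + 4 x 1
```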
When I worked on my OpenBSD server, I thought I was “too unskilled” to read the documentation for setting up IKEv2, so I asked AI. After several rounds of frustratingly explaining the issue, only to get answers that didn’t make sense, I gave the documentation a more in-depth read and managed to figure out the issue. This type of exchange happened several times, and after realising I was just using it as a glorified rubber duckie, I flat out deleted my account. I would’ve wasted less time, energy and resources if I’d had the courage to ask for help, and I feel a bit ashamed about not doing so earlier.
All this to say: don’t be afraid to ask for help on forums, look stuff up on the internet, or ask someone you know for help with a task. It doesn’t help that people are on tight schedules nowadays, and that workplaces expect more output because of AI. But if you do manage to find some time for your personal activities, don’t hesitate to take your time, and try some of the above.
I briefly toyed with some of the early image generation stuff. It was a fun toy for making NPCs for RPGs.
But now it’s everywhere and being used as an excuse to squeeze labor harder and deliver dubious value. If it just stayed as a toy I wouldn’t mind it much. I get annoyed at the aggressive “do you want me to rewrite that for you??” shit that pops up now.
I never liked the idea of generative AI, even less when it was marketed as a replacement for creativity. But I DID use it once, tryna see if I could pull off that trick some people did to manipulate it into giving me free game codes. It never worked.
The history of the field has made important contributions to how modern computing works. Optimizing compilers, multitasking, and virtual machines all came directly out of work that started with AI. Even if you don’t use all of these directly, you benefit from them just by using a computer built after 1980.
If you’re interested, I’d recommend Steven Levy’s “Hackers”, particularly the first two sections, which are about the MIT AI Lab (CSAIL’s predecessor). The third section is about Sierra On-Line, which has its own historical interest but isn’t really relevant here (and for various reasons, that part of the book hasn’t aged as well, IMO).
I don’t like the part of the field that has been weaponized against the working class. Which is almost everything that gets headlines right now. There are still good researchers doing good work who should be praised. They’re just not the ones “publishing papers” on Anthropic’s web site.
I wish they would quit calling it AI. It’s not.
I liked AI before GPT3
It was fun playing with a terrible image/word generator
I played around with it when it was a lot more abstract. I remember Wombo had some neat art styles you could create images in. I didn’t realize how much of it was plagiarized though.
I studied Machine Learning in college and was excited by the developments being made in Neural Networks.
I followed the tech closely the entire time, even today.
But once we got a good working general use LLM then marketing teams went fucking hog wild promising things that the tech wasn’t capable of, just because they knew they could trick idiots into thinking they had created “Artificial Intelligence” -_-
The tech is cool and revolutionary, but Machine Learning is still only capable of doing the things we were using it for before the LLMs got slapped onto them, and the use cases for LLMs are very limited too.
It’s overhyped and inaccurately named, since it isn’t intelligent in any way. A waste of water and electricity on output that can’t meaningfully replace any human work.
I didn’t formally study it, but yes! This is my take! I think I got bored with it when the simple chat interfaces came out. But it’s more than that. The marketing is horrible.
Nope never liked it. Never will.