So as a senior, you could abstain. But then your junior colleagues will eventually code circles around you, because they’re wearing bazooka-powered jetpacks and you’re still riding around on a fixie bike.
Lol, this works in a way the author probably didn’t intend. They’re strapped into extremely dangerous tools that were never really a great idea. They’ll code some circles, set their legs on fire, and crash into a wall.
And when the inevitable production issue occurs, we all have to clean up, senior or not.
So sick of this doomer BS. I went and had a look at Linus’ “FoSS vibe coded project” that everyone’s been flipping their shit over.
- It’s only the python UI
- It’s gluing together matplotlib and pandas
- It’s written like a crazy person wrote it, and I would absolutely reject it if it were a PR.
Like, it apparently worked for Linus and he also doesn’t care to learn python at all. But I was under the impression that these things are supposed to be good at python.
This was completely unhinged garbage that I’m shocked even worked. It created the same function twice, one after the other. We have nested Python functions.
We have these unhinged guard conditions where, if navigation is true, we return. THEN we immediately set it to true, and at the end of the function we set it to false again. I thought I was high reading that code. If you legitimately think these things are better at writing code than you are, you suck at writing code.
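For the uninitiated, the pattern looked roughly like this — a reconstruction with made-up names, not the actual code:

```python
# Rough sketch of the guard pattern described above; all names are invented.
is_navigating = False  # module-level flag


def do_navigation():
    print("navigating...")  # stand-in for whatever the real work was


def navigate():
    global is_navigating
    if is_navigating:       # bail out if the flag claims we're already navigating...
        return
    is_navigating = True    # ...then immediately set the very flag we just checked
    do_navigation()
    is_navigating = False   # and set it back to false at the end of the function
```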
Nah, those tools don’t work. Trust me, I’m a Zoomer, I tried them, and they suck!
When it comes to “Choose two: Good, done Fast, or Cheap”, I’m still choosing “Good” twice.
And laughing at all the fools running into a wall with their AIs, not realizing all they did is choose “Fast” twice instead, but neither Good nor even Cheap.
Eh, no?
I still haven’t seen AI produce anything remotely reliable within a single function, let alone put multiple functions together, let alone build something with multiple classes, let alone something actually useful, let alone a big project.
Yeah, AI is useful as a rubber ducky. I use it to get a sense of possible directions, or sometimes a fresh idea I hadn’t considered. It saves me from opening up DDG and Ctrl-clicking the first ten results to check all the pages, sometimes.
But the AI coder inside my IDE still gets code confidently wrong about 70% of the time, and we’re talking single lines here, with fundamental mistakes like variables that haven’t been initialized.
Having said that, I’m sure that someday someone will come up with an AI that can do real development, and that day it’ll be able to develop itself, and that day we’ll all be properly fucked, because it will very quickly spiral into something we can’t control and something more intelligent than all of us.
I’m sure some tech bros can’t wait for that to happen and I honestly believe we need to rid the world of these idiots before they doom and destroy us all.
I built a full-stack SaaS that is deployed at my work. It is exposed to the internet, and all I have used is pentesting plus asking the AI “what is this”, “fix this”, and feature requests.
It has awful context limitations. Saying “do this” means it overfills the context halfway through and loses the nuance as it tries to restart the task after summarizing. I don’t trust it to make a todo list and keep to it, so I have to work with slightly longer-term markdown files as memory.
I have had good progress when I say “add this pentest_failure/feature_request to an open items list markdown file”; the AI then finds the context, defines the issue, and updates the file. Rinse, repeat. THEN I say “I want to make a refactor that will fix/implement as many of the open items list issues as possible, can you/the_ai make a refactoring spec”. THEN I carefully review the business logic in the refactoring spec, THEN I tell the AI to implement phase 1 of the refactoring spec, then I test, then I say do phase 2… etc.
Design concerns like single source of truth, DRY, separation of concerns, and YAGNI have come up. I have asked about API security best practices. I have asked about test environment vs. production.
I developed without git, and the sheer amount of dumb duct-tape code produced by a no-short-term-memory AI and exposed by pentesting was infuriating, but I’ve got a process that works for my level of understanding.
AI Skills, rules, etc. are still not quite clear to me.
I disagree with this:
And yet here we are. The worst fact about these tools is that they work. They can write code better than you or I can, and if you don’t believe me, wait six months.
I dealt with Apache Santuario for some XML-DSig code and damn, the documentation is horrible; only some things on Stack Overflow worked, but the Javadocs saved me. I just needed to think about it and rewrite, rewrite, and rewrite until the code was legible. Whenever I asked any LLM, they “wrote” code straight from 2005, junior-level style, using code patterns directly from the depths of time.
Edit: These LLMs are dumb as fuck; give them any non-standard or completely new thing and they just shit their datacenter pants and throw garbage at the screen.
Yeah. I think the only people saying that LLMs can write better code than “us” are the ones who can’t write good code themselves. And thanks to the Dunning-Kruger effect, they overestimate their own skill and think they can speak for the rest of us.
Yep, and the newer ones are getting worse.
I’m in embedded systems. I’ve yet to see an LLM manage to do anything even remotely useful in anything close to my field. And I don’t predict them being able to anytime soon, because everything is proprietary and locked down to single vendors.
They’re useless in video games also.
Same, I’m in automotive embedded, and at best, LLMs are helpful for generating unit tests. No one trusts them to make good memory-safe code.
So, are you selling out your niche programming community to train the model that replaces you? Or are you waiting for one of your competitors to do it first?
Well, since it’s all locked down, there’s nothing to train on. And if you find something, it’s usually specific to one device and doesn’t transfer.
it’s all locked down there’s nothing to train on.
Not true, it can train on you.
Not really, I’m not allowed to share what I do.
Well that’s the kind of quitter attitude they aren’t looking for. I guess you’re getting sold out by one of your competitors.
Well, they don’t share their stuff either.
Oh, your field is perfectly secure from AI due to your amazing diligence… Good work…
Writing code with an LLM is often actually less productive than writing without.
Sure, for some small tasks it might poop out an answer real quick, and it may look like something that’s good. But it only looks like it; checking whether it is actually good can be pretty hard. It is much harder to read and understand code than it is to write it. And in cases where a single character is the difference between having a security issue and not having one, it’s very hard to spot those mistakes. People who say they code faster with an LLM just blindly accept the given answer, maybe with a quick glance and some simple testing. Not in-depth code review, which is hard and costs time.
Then there are all the cases where the LLM messes up and doesn’t give a good answer, even after repeated back and forth. Once the thing is stuck on an incorrect solution, it’s very hard to get it out of there, and once the context window runs out it becomes a nightmare. It will say something like “Summarizing conversation”, which means it drops lines from the conversation that are deemed superfluous, even if those are critical requirement descriptions.
There’s also the issue that an LLM simply can’t do a large, complex task. They’ve tried to fix this with agents and planning modes and such, breaking everything down into smaller and smaller parts so each can be handled, but with nothing keeping an overview of the mismatched pile of nonsense it produces. Something a real coder is expected to handle just fine.
The models are also always trained a while ago, which can be really annoying when working with something like Angular. Angular gets frequent updates, and those usually bring breaking changes, updated best practices, and sometimes entire paradigm shifts. The AI simply doesn’t know what to do with the new version, since it was trained before it existed, and it will spit out Stack Overflow answers from 2018, especially the ones with comments saying to never, ever do that.
There’s also so much more to being a good software developer than just writing the code. The LLM can’t do any of those other things; it can just write the code. And by not writing the code ourselves, we are losing an important part of the process. That’s a muscle that needs flexing, or the skill rusts and goes away.
And now they’ve poisoned the well, flooding the internet with AI slop and in doing so destroying it. Website traffic has gone up, but actual human visits have gone down. Good luck training new models on that garbage heap of data. Which might be fine for now, but as new versions of stuff get released, the LLMs will get more and more out of date.
People who say they code faster with an LLM just blindly accept the given answer, maybe with a quick glance and some simple testing. Not in-depth code review, which is hard and costs time.
It helps me code faster, but I really only outsource boilerplate to an LLM. I will say it also helps with learning the syntax for libraries I’m unfamiliar with just in that I don’t have to go through several pages of documentation to get the answers I need in the moment. The speed-up is modest and nowhere near the claims of vibe coders.
Because this comes up so often, I have to ask, specifically what kind of boilerplate? Examples would be great.
Totally fair question. One of my go-to examples is data visualization stuff: just having an LLM spit out basic graphs with the parameters in the function call (something like the sketch below). Same with mock-ups of basic user interfaces. I’m not a front-end person at all, and I usually want something basic and routine (but still time-consuming), like CRUD or something, so just prompting for that and getting a reasonably decent product is a helpful time saver.
For anything more than basic stuff, I don’t think I’ve ever gotten more than a single small function that I then verify line by line.
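To make the graph example concrete, here’s a minimal sketch of the kind of thing I mean (pandas/matplotlib, with made-up function and column names):

```python
# Thin plotting wrapper of the sort described above: takes a DataFrame plus
# column names and renders a labelled bar chart.
import pandas as pd
import matplotlib.pyplot as plt


def plot_bar(df: pd.DataFrame, x_col: str, y_col: str, title: str = "") -> None:
    fig, ax = plt.subplots(figsize=(8, 5))
    ax.bar(df[x_col], df[y_col])
    ax.set_xlabel(x_col)
    ax.set_ylabel(y_col)
    ax.set_title(title)
    fig.tight_layout()
    plt.show()


# Toy usage:
plot_bar(pd.DataFrame({"month": ["Jan", "Feb", "Mar"], "sales": [10, 15, 12]}),
         "month", "sales", title="Monthly sales")
```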
I always wonder this as well… I will use tools to help me write repetitive stuff periodically. Most often I’ll use a regex replace, but occasionally I’ll write a little Perl or sed or awk. I suspect the boilerplate these people talk about is either this or setting up projects, and I think there are better tools for that too.
My experience as well.
I’ve been writing Java lately (not my choice), which has boilerplate, but it’s never been an issue for me because the Java IDEs all have tools (and have had for a decade+) that eliminate it. Class generation, main, method stubs, default implementations, and interface stubs can all be generated easily in Eclipse, for example.
Same for tooling around (de)serialization and class/struct definitions; I see that being touted as a use case for LLMs, but like… tools have existed[1] for doing that since before LLMs, and they’re deterministic and computationally free compared to neural nets.
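(Python has the same kind of thing in the standard library, just as an illustration of the deterministic route: dataclasses generates the class boilerplate and asdict handles the dict/JSON side, no neural net involved.)

```python
# Deterministic boilerplate generation from the standard library:
# @dataclass writes __init__, __repr__ and __eq__ for you, and asdict()
# turns an instance into a plain dict ready for json.dumps.
from dataclasses import dataclass, asdict
import json


@dataclass
class Point:
    x: float
    y: float
    label: str = "origin"


p = Point(1.0, 2.5, label="sensor_a")
print(p)                      # Point(x=1.0, y=2.5, label='sensor_a')
print(json.dumps(asdict(p)))  # {"x": 1.0, "y": 2.5, "label": "sensor_a"}
```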
But then your junior colleagues will eventually code circles around you…
Probably not. I just ran into a dude who suggested using LLMs to fix misplaced braces in source code.
if you don’t believe me, wait six months
We are now on year…three? Of this mantra.
- Code quality will not improve.
- People’s expectations will go down.
- Normal programmers will be phased out of the industry.
- Educational systems will change, leading to no normal programmers for OSS.
- Once Gen AI companies get enough power, they will start increasing rates, citing costs.
- Vibe coding will become as expensive and time-consuming as current programming, but give worse output.
- It will be tagged “inflation”.
Source?
The same thing is currently happening with all factory-type industries, which were originally workshop-based and customisation-friendly.
- They got overthrown by mass-production models, due to lower costs.
- Now mass-produced stuff has lower creation costs but higher shipping costs, so there’s no real benefit after forgoing quality for quantity.
- All stuff is more expensive to buy than it would have been if workshop-based industries had stayed dominant.
- This causes an increase in living expenses for everyone, including the people working at the few remaining workshop producers.
- Now workshop stuff is again more expensive than mass-produced stuff and can only be considered when one has enough stability and the ability to save.
- And the lack of workshops means that even those are hard to find.
“Coding but for non-programmers” has been a thing for a while in the industry. Business rules engines were a big one. The promise is always the same: non-technical folks/business folks will be able to use this. They never are. Devs still have to do the hard part, now in a weird thing that isn’t quite coding.
Wasn’t the real reason for that that all of those things still require the user to treat it as a logic system and feed in proper logic one way or another (which might not be coding, but is still programming)?
And since the non-technical people are actually bad at the logic itself, it was useless to them. I could see those tools being useful to other kinds of engineers (like civil or mechanical engineers) who can do logic but aren’t into coding or even computers.
On the other hand, the user treats AI as a non-logical system, kinda like another person. And I see the business/marketing/hype types getting better at it. I, for one, find it much easier and faster to use a hard logic system (the harder, the better) and programming suits me, while I find AI exhausting.
It’s hard to paint it in broad strokes, but yeah, that was part of it. The one that really comes to mind for me is this thing called ILOG, which tried to map phrases in English to code (sort of like Gherkin does for tests, but I actually like Gherkin). It effectively hid very important logic for how the system worked in a really weird layer that you had to use a special IDE for, one that was super difficult to get working properly. I remember that seeing the text descriptions was sort of easy, but seeing what actually happened was really difficult. There was a view that would give you something like code, but it was just too difficult to get to. Even then, it was something generated, not something you could edit.
I’ve sort of thought about this a lot because it’s fascinating to me. I think the best option for stuff like this, if you want to really pursue it, is to use “beginner-friendly” languages (Python comes to mind, despite me hating it lol) with some sort of easy web interface to upload and download them. Maybe use JavaScript, since it works nicely in the browser and can be run right there for tests or whatever. Make some sort of sandbox to limit what can be done, or just have devs more actively review it (maybe a PR process). Maybe even have the web tool just be a front end for a tool that interacts with git (or some forge like GitHub specifically, if it needs to do stuff like opening pull requests).
While Python seems fine to me, JS is where it becomes too un-hard for me.
I have to use QML quite frequently in my work (it’s based on JS), and I make a point of including as little program logic as possible in QML and handing any input off to C++ code in as few steps as I can. Essentially, I keep QML just for markup and leave the program logic to C++.
I mourn it too, because there are dumbasses like this who are buying the hype and setting the industry back by decades as a result.
I grieve. I came to software development because I wanted to write. I studied literature in school, and I wanted to put pen to paper, to read and write and to communicate with other humans about the things that matter to me. But the academy graduates far more literature students than there are positions for writers and teachers, and the market promised me that if I learned to write code instead of human language, I would never want for a steady paycheck. And so I dug in, creating open source projects on GitHub to make a portfolio, and reassuring myself that software development was primarily a way that humans communicated with each other about how we want the systems around us to work. I had hope — I could write C# to talk to my coworkers about how health care should work, or Java to talk about how financial systems should work, or JavaScript to talk about how people should find jobs, and the fact that a computer was eavesdropping was incidental to the project.
And I think what is so mournful for me is that my voice is preserved forever in these LLMs. They’ve scraped everything I’ve ever done, and joined me into the shared song of all of humanity and lowered the barriers to entry for new software developers and everything should be beautiful and shared, but instead of it being the great equalizer that allows users to finally generate the custom software for themselves they’ve always wanted, and never been able to articulate to me, we’ve stolen the entire intellectual history of the human race to help a small cadre of fascists burn the planet. I fell for it, completely, and that’s what I grieve for most.
A junior with an LLM is like a gorilla with a sledgehammer. Good luck getting it to do anything useful. You’ll need even more luck getting it not to break anything.
LOL