This was honestly my biggest fear for a lot of FOSS applications.
Not necessarily in a malicious way (although there’s certainly that happening as well). I think there’s a lot of users who want to contribute, but don’t know how to code, and suddenly think…hey…this is great! I can help out now!
Well meaning slop is still slop.
Look. I have no problems if you want to use AI to make shit code for your own bullshit. Have at it.
Don’t submit that shit to open source projects.
You want to use it? Use it for your own shit. The rest of us didn’t ask for this. I’m really hoping the AI bubble bursts in a big way very soon. Microsoft is going to need a bailout, OpenAI is fucking doomed, and X/Twitter/Grok could go either way honestly.
Who in their right fucking mind looks at the costs of running an AI datacenter, and the fact that it’s more economically feasible to buy a fucking nuclear power plant to run it all, and then says, yeah, this is reasonable?
The C-whatever-O’s are all taking crazy pills.
A similar problem is happening in submissions to science journals.
AI crowd trying hard to find uses for AI
I think the open source slop situation is also in part people who just want a feature and genuinely think they’re helping. People who can’t do the task themselves also can’t tell that the LLM can’t do it either.
But a lot of them are probably just padding their GitHub accounts too. Any given popular project has tons of forks by people who just want to have lots of repositories on their GitHub but never make changes, because they can’t actually do it. I used to maintain my employer’s projects on GitHub, and we’d literally have something like 3000 forks, 2990 of which were just no-change forks by people with lots of repositories but no actual work. Now these people are using LLMs to also make changes…
Sounds like they need a bot to check the code for AI telltales. Send AI to kill AI.
Sounds like an excellent use of power, water, and CPU cycles in data centers.
Well I mean it’s that or find more guys willing to go through it manually, which seems to be the problem since it’s open source. Unless they can scrounge up the money to hire people to do it full time.
It’s frequently hard to tell codegen slop at a glance; you actually have to read it and understand what’s going on. An LLM that would produce such slop itself isn’t going to be effective at detecting it.
Get that code off of slophub and move it to Codeberg.
Is codeberg magically immune to AI slop pull requests?
No, but they are actively not promoting or encouraging it. GitHub and MS are. If you’re going to keep staying on the pro-AI site, you’re going to eat the consequences of that. GitHub is actively encouraging these submissions with profile badges and other obnoxious crap. It’s not an appropriate environment for development anymore. It’s gamified AI crap.
No (just like Lemmy isn’t immune to AI comments), but GitHub is actively working towards AI slop.
Godot is also weighing the possibility of moving the project to another platform where there might be less incentive for users to “farm” legitimacy as a software developer with AI-generated code contributions.
Aahhh, I see the issue now.
That’s the incentive to just skirt the rules of whatever their submission policy is.
This is big tech trying to kill FOSS.
People want AI, people get AI! Force-feed yourself with AI, that’s what you wanted, right? Ask yourself: what innovations has AI brought us, apart from money for big corporations?
We now get a little digital idiot popping up in every program and OS saying it can help. It’s like Clippy but annoying.
I am a game developer and a web developer, and I sometimes use AI just to write template code for me so that I can get through the boilerplate faster. For the rest of the code, AI is soooo dumb it’s basically impossible to make something that works!
The context windows are only so large. Once you give it too much to juggle, it starts doing crazy shit.
Boilerplates are fine, they can even usually stub out endpoints.
Also the cheap model access is often a lot less useful than the enterprise stuff. I have access to three different services through work and even inside GPT land there are vast differences in capability.
Claude Code has this REALLY useful implementation of agents. You can create agents with their own system prompts. Then the main context window becomes an orchestrator: you tell it what you’re looking for and tell it to use the agents to do the work. The main window becomes a project manager with a mostly empty context window; it farms out the requests to the agents, which each have their own context window. Each new task is individual. The orchestrator makes sure the agents get the job done, and none of the workloads get so large that stuff goes insane.
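For anyone curious, a subagent is (going from memory, so treat the exact format as approximate) just a markdown file under `.claude/agents/` with YAML frontmatter, and the body becomes that agent’s system prompt. A made-up example:

```markdown
---
name: test-writer
description: Writes unit tests for a single module. Use for any "add tests" task.
tools: Read, Grep, Glob, Write
---
You write unit tests for exactly one module per task.
Keep tests independent, cover the edge cases, and only
touch test files, never source files.
```

The orchestrator just delegates “write tests for X” to it, and the tokens burn in the agent’s context window instead of the main one.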
It’s still not like you can say “go make me this game”, then argue with it for a couple of hours and end up with good things. But if you keep the windows small, it can crap out a decent function/module if you clarify you want to focus on security, best practice, and code reusability. They’re also not bad at writing unit tests.
Something like speckit is necessary to make big, sweeping changes that continue past the context window
Yes, I feel like many people misunderstand AI capabilities.
They think it somehow comes up with the best solution, when really it’s more like lightning: it takes the path of least resistance. It finds whatever works the fastest, if it can even do that without making something up and then lying that it works.
It by no means creates elegant and efficient solutions to anything
AI is just a tool. You still need to know what you are doing to be able to tell if its solution is worth anything, and then you still need to be able to adjust and tweak it.
It’s most useful for maybe giving you an idea of how to do something, by coming up with a method/solution you may not have known about or wouldn’t have considered. It’s also useful for testing your own stuff, or having it make slight adjustments.
“Works” in this case doesn’t mean the output works, but that it passes the input parameter rules.
> It finds whatever works the fastest
For a very lax definition of “works”…
So I guess it is time to switch to a different style of FOSS development?
The cathedral style, which Fossil uses: basically, in order to contribute you have to be manually admitted into the group. It’s a high-trust environment where devs know each other on a first-name basis.
What if I want to contribute to a FOSS project because I’m using it, but I don’t want to make new friends?
But at least it’s not AI-Slop
That is a wonderful method because it works similarly to the way many Fediverse server administrators admit people to new accounts. This way the slop is immediately filtered out.
But what if the code is your own and super embarrassing?
Why would your code be embarrassing? Yes, I get it, but so what?
I think moving off of GitHub to their own forge would be a good first step to reduce this spam.
To Codeberg we go!
Codeberg is cool, but I would prefer not having all FOSS projects centralised on another platform. In my opinion, projects the size of Godot should consider using their own infrastructure.
Hosting a public code repo can be expensive; however, they can run a private repo using Forgejo and mirror to Codeberg to create redundancy and have public code that doesn’t eat so much monthly revenue, if they even have revenue.
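The crude version of that mirroring is just a second remote plus a scheduled push; the remote name and URL here are made up:

```sh
# One-time: add the Codeberg mirror as a second remote (hypothetical URL)
git remote add codeberg git@codeberg.org:example/project.git

# From cron or a CI job: push every branch and tag to the mirror
git push --mirror codeberg
```

I believe Forgejo also has built-in push mirrors if you’d rather not script it.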
Back to sourceforge it is then.
Don’t underestimate legitimate contributions from people who only do it because they already have an account.
Let’s be realistic. Not everyone is going to move to Codeberg. Godot moving to Codeberg would be decentralizing.
It’s discussed in the Bluesky thread, but the CI costs are too high on GitLab and Codeberg for Godot’s workflow.
That’s a shame. Did they take the wasted developer time dealing with slop into account in that discussion?
Stupid question:
Are there really no safeguards in the merging process except for human oversight?
Isn’t there some “In Review” state where people who want to see the experimental stuff can pull it, and if enough™ people say “This new shit is okay” it gets merged?
So the main project doesn’t get poisoned, everyone can still contribute in a way, and those who want to experiment can test the new stuff.
It is my understanding that pull requests say “Hey, I forked and modified your project. Look at it and consider adopting my changes in your project.” So anyone who wants to look at the “experimental stuff” can just pull that fork. Someone in charge of the main branch decides if and when to merge pull requests.
The problem becomes the volume of requests; they’re kinda getting DDoS’d.
Yup! Replace the word “fork” with “branch” and that basically matches the workflow. Forking implies you are copying the code in its current state and going off to do your own thing, never to return (but maybe grabbing updates from time to time).
One would hope that the users submitting these PRs vetted the LLM’s output before submitting, but instead all of that work is getting shifted onto the maintainers.
Many do have automated checking, testing, rules for the PR author to follow, and such.
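For reference, the GitHub baseline is usually a workflow along these lines (minimal sketch; `make test` stands in for whatever the project’s real test command is):

```yaml
# .github/workflows/pr-checks.yml -- run the test suite on every pull request
name: PR checks
on: pull_request

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run tests
        run: make test  # hypothetical target; substitute the real test command
```

But automated checks only catch what they were written to catch; plausible-looking slop sails right through.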
The issue is that these submitters are (often) drive-by spammers. They aren’t honest, they don’t care about the project, they just want quick kudos for a GitHub PR on a major project.
Filtering a sea of scammers is a whole different ballgame than guiding earnest, interested contributors.
Most projects don’t have enough people or external interest for that kind of process.
It would be possible to establish some tooling like that, but standard forges don’t provide that. So it’d feel cumbersome.
And in the end you’re back at having contributors, trustworthiness, and quality control, because testing and reviewing are contributions too. You don’t want just a popularity contest (“I want this”), nor to blindly trust unknown contributors.
You can always check out the branch and run it yourself.
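On GitHub that’s a single fetch; the PR number below is made up, but the `pull/<n>/head` refspec is the real convention:

```sh
# Fetch PR #1234's head into a local branch and switch to it (number is hypothetical)
git fetch origin pull/1234/head:pr-1234
git checkout pr-1234
# Build and run it locally before weighing in on the review
```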
It would be nice to bump up the useful stuff through the community, but even then there could be bot accounts that push the crap to the top.
Time to become a plumber!
Codeberg Anubis when?
there is?
I’m ignorant 😅 I don’t use either. I guess it doesn’t really defend against browser-remote-controlling bot agents.