Weird, everybody talking about using AI for videogames seems to praise its ability to speed up the process of developing things. Big studio after studio getting caught with placeholders and whatnot. Does that really make your point? Because it seems to do the opposite.
My point isn’t that AI is good or bad, but that the difference comes down to how much it gets leaned on.
In this case, it’s how AI is (assumed to be) used at MS vs how it was used in the OP.
MS appears to be leaning heavily on generative AI to produce code. In my own experience, it’s pretty good these days at responding to a prompt with a series of actions that achieves what the prompt asked for, but bad at creating overall cohesion between prompts. It’s like it’s pretty good at making Lego blocks, but if you try putting them all together, it looks like you built something from 50 different sets, and the connections between the blocks are flawed enough that the whole thing is liable to collapse the bigger it gets.
In the OP, AI is being used to submit bug reports. This one can be thought of as using an AI to write a book report instead of using an AI to write the book in the first place. If the AI writes a shitty report, it has zero effect on the book itself. But the AI might just include a list of all the typos in its report, which is useful for correcting the errors in the book.
Also, game studios forgetting to replace placeholders is really a separate issue with the process itself, though it can also show a lack of attention to detail and maybe indicate that an AI was handling more of the process. A decent pipeline would flag every asset as either placeholder or final and then review all the flags before publishing to catch something like this.
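Just to make that concrete, here’s a minimal sketch of what that flag-and-review step could look like (all the names and the `Asset` shape are made up for illustration, not any real engine’s API):

```python
# Hypothetical sketch: every asset carries a placeholder/final flag,
# and a pre-publish check blocks the build if any placeholder remains.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    is_placeholder: bool  # set by whoever authored or imported the asset

def prepublish_check(assets):
    """Return the names of assets still flagged as placeholders."""
    return [a.name for a in assets if a.is_placeholder]

assets = [
    Asset("hero_model", is_placeholder=False),
    Asset("town_music", is_placeholder=True),  # forgotten placeholder
]

leftovers = prepublish_check(assets)
if leftovers:
    print("Blocked: placeholder assets remain:", leftovers)
```

The point isn’t the code itself but the gate: a dumb boolean plus a mandatory review catches exactly the “shipped with placeholders” failure, whether a human or an AI did the authoring.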
So this isn’t a general defense of using AI, I’m just saying that it’s possible to use it without everything it touches turning to slop, but that it often isn’t used like that, resulting in slop.
And it’ll be easy to fall into the slop trap, given how it keeps making leaps-and-bounds improvements that help with individual instances of it fucking up but don’t resolve the fundamental issues. Those issues probably mean LLMs will always produce some sort of slop, because everything boils down to word association, just with a massive set of conditional probabilities encoded into it that gives the illusion of understanding.