• tal@lemmy.today
    1 day ago

    Take-Two’s CEO doesn’t think a Grand Theft Auto built with AI would be very good | VGC

    Sounds fair to me, at least for near-term AI. A lot of the stuff that I think GTA does well doesn’t map all that well to what we can do very well with generative AI today (and that’s true for a lot of genres).

He added: “Anything that involves backward-looking data compute and LLMs, AI is really good for, and that applies to lots of things that we do at Take-Two. Anything that isn’t attached to that, it’s going to be really, really bad at… there is no creativity that can exist, by definition, in any AI model, because it is data driven.”

    Making a statement about any AI seems overly strong. This feels a little like a rehash of the old “can machines think?” question. The human mind is also data-driven: we learn about the world, then create new content based on what we’ve learned. We have more sophisticated mechanisms for synthesizing new data from our memories than present LLMs do, but I’m not sure that those mechanisms need to be all that much more complicated, or that one really requires human-level synthesizing ability to create pretty compelling content.

    I certainly think that the simple techniques existing generative AI uses, where you just have a plain-Jane LLM, may well be limiting in some substantial ways. But I don’t think those limits hold up over the longer term, and it may not take much added sophistication to permit a lot more functionality.

    I also haven’t been closely following the use of AI in video games, but I think that there are some games that make effective use of generative AI now. A big one for me is the use of diffusion models for dynamic generation of illustrations. I like a lot of text-based games — maybe interactive fiction or the kind of text-based choose-your-own-adventure games that Choice of Games publishes. These usually have few or no illustrations; they’re often “long tail” games, made on small budgets by small teams for niche audiences. The ability to inexpensively illustrate such games would be damned useful — and my impression is that some of the Choice of Games crowd have made use of that. With local computation capability, the ability to do so dynamically would be even more useful. The generation doesn’t need to run in real time, and a single illustration might stay on screen for some time, but it could help add atmosphere to the game.

    There have been modified versions of Free Cities (note: very much NSFW, covering a considerable amount of hard kink material, including snuff, physical and psychological torture, sex with children and infants, slavery, forced body modification and mutilation, and so forth; you have been warned) that incorporate this functionality: they generate dynamic illustrations on local diffusion models, from prompts the game procedurally assembles. As that demonstrates, this is clearly possible from a technical standpoint right now, and has been for quite a few months, and I suspect that it would not be hard to make it an option for a very wide range of text-oriented games with relatively little development effort. It just needs standardization, ease of deployment, sharing of parallel compute resources among software, and so forth.
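    As a rough sketch of how a text game might wire this up: build an illustration prompt procedurally from game state, then hand it to a locally running Stable Diffusion instance via the AUTOMATIC1111 webui’s txt2img API. The game-state fields, prompt template, and example content below are all invented for illustration; only the endpoint shape is the webui’s.

    ```python
    import json
    import urllib.request

    def scene_prompt(location, characters, time_of_day):
        """Procedurally build an illustration prompt from game state.
        (These state fields are hypothetical; a real game would have richer data.)"""
        chars = ", ".join(characters) if characters else "no one"
        return (f"digital illustration of {location} at {time_of_day}, "
                f"featuring {chars}, atmospheric lighting, detailed")

    def illustrate(prompt, url="http://127.0.0.1:7860"):
        """POST the prompt to a local AUTOMATIC1111 webui txt2img endpoint;
        the response carries base64-encoded images in its "images" field."""
        payload = json.dumps({"prompt": prompt, "steps": 20,
                              "width": 512, "height": 512}).encode()
        req = urllib.request.Request(f"{url}/sdapi/v1/txt2img", data=payload,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["images"]

    # The game assembles the prompt from its own (here invented) world state:
    prompt = scene_prompt("a rain-soaked harbor tavern", ["the smuggler Vess"], "midnight")
    ```

    Since an illustration can stay on screen for a while, the `illustrate` call could run in the background and swap the image in whenever generation finishes.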

    As it exists in 2025, SillyTavern used as role-playing software is not really a game; it’s a form of interactive storytelling. It has some limited functionality designed around making LLMs support this sort of thing: handling a “group” of characters; letting the player manually toggle NPC presence; and “lorebooks”, where trigger tokens showing up cause statically-written information about a fictional world that the LLM doesn’t know about to be inserted into the context for text generation. But it’s not a game in any traditional sense of the word. One might create characters with adversarial goals and try to overcome them, but it doesn’t deal well with creating challenges, and the line between the player and a DM is fairly blurred today, because the engine requires hand-holding to work. Feeding the past story back into an LLM as part of its prompt is not a very efficient way to store world state. Some of this might be addressed with more sophisticated AIs that retain far more world state, in a more-efficient-to-process form.
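    The lorebook mechanism can be sketched as simple keyword-triggered context injection. This is a toy version with invented entries; SillyTavern’s actual implementation adds features like regex keys, scan depth, and insertion ordering.

    ```python
    # Toy lorebook: maps trigger keywords to static world lore.
    # (Entries here are invented for illustration.)
    LOREBOOK = {
        "ashfall": "Ashfall is a mining town buried by the eruption of Mount Kel.",
        "mount kel": "Mount Kel is an active volcano north of the trade roads.",
    }

    def inject_lore(user_message, context):
        """Scan the player's message for trigger keywords and prepend any
        matching lore to the context that will be sent to the LLM."""
        triggered = [lore for key, lore in LOREBOOK.items()
                     if key in user_message.lower()]
        return "\n".join(triggered + [context]) if triggered else context

    ctx = inject_lore("Tell me about Ashfall.", "Story so far: ...")
    ```

    The point of the design is that the model never needs to have been trained on the fictional world: the relevant facts ride along in the prompt only when they become relevant.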

    But I am pretty convinced that with a little work, even with existing LLMs, it’d be possible to make a whole genre of games that do effectively store world state, where the LLM interacts with a more-conventionally-programmed game world whose state is managed as it has been by traditional software. For example, I strongly suspect that it would be possible to glue even an existing LLM to something like a MUD world — perhaps via LoRAs or MoEs, or via additional “tiny” LLMs. That permits complex characters to add content within a game world whose rules are defined in the traditional sense. I think I’ve seen one or two early stabs at this, but while I haven’t been watching closely, it doesn’t seem to have produced real, killer-app examples…yet. I don’t think that we really need any new technologies to do this, just game developers to pound on it.
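    To make that division of labor concrete, here is a toy sketch of the split: the MUD-style world’s rules and state live in ordinary code, and the LLM is only handed a serialized summary to narrate. All room names and the `narration_prompt` helper are hypothetical, and the model call itself is left out.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Room:
        name: str
        exits: dict = field(default_factory=dict)   # direction -> room name
        items: list = field(default_factory=list)

    # A two-room toy world; a real MUD would load this from data files.
    WORLD = {
        "cellar": Room("cellar", exits={"up": "kitchen"}, items=["rusty key"]),
        "kitchen": Room("kitchen", exits={"down": "cellar"}),
    }

    def move(state, direction):
        """Rules live in ordinary code: the LLM never mutates state directly."""
        room = WORLD[state["room"]]
        if direction not in room.exits:
            return state, False
        state["room"] = room.exits[direction]
        return state, True

    def narration_prompt(state):
        """Serialize the authoritative world state into a prompt; the LLM
        only writes flavor text around facts the engine guarantees."""
        room = WORLD[state["room"]]
        return (f"Describe the {room.name}. Visible items: {room.items or 'none'}. "
                f"Exits: {list(room.exits)}. Do not invent items or exits.")

    state = {"room": "cellar"}
    state, ok = move(state, "up")
    ```

    Because the engine remains the source of truth, the LLM hallucinating an exit or an item costs only a bad sentence, not a broken game.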