There are some fundamental obstacles to that. I don't want, for instance, a game AI that only does what I tell it to do. I want to be surprised and presented with situations I haven't considered. However, LLMs replicate language and symbol patterns according to how they are trained. Their tendency is toward cliche, because the cliche is the most expected outcome of any narrative situation.
There is also the matter that LLMs ultimately don't have real understanding of, or opinions about, the world and its themes. They can give us descriptions of trees, and diffusion models can give us a picture of a tree, but they don't know what a tree is. They don't have the experiential and emotional capacity to make up their own minds about what a tree is and represents; they can only use and remix our words. For them to say something unique about trees, they are basically trying things at random until something sticks, with no real basis of their own. We don't have true general AI with that level of understanding and introspection.
I suppose that sufficiently advanced and thorough modelling might give them the appearance of these qualities… but at that point, why not just have the developers write those worlds and characters? Sure, that content is much more limited than the potentially infinite LLM responses, but as you wring endless content from an LLM, you are most likely going to drift outside the scope of whatever parameters were set and back into cliches and nonsense.
To be fair, though, that depends on the type of game we are talking about. I doubt an LLM-driven Baldur's Gate would come anywhere near as good as the real thing. But I suppose it could work for a game like Animal Crossing, where we don't mind the little characters constantly rambling catchphrases and nonsense.
AI couldn't even do that a year or so ago; give it time and it'll get there.