this post was submitted on 19 Jan 2024
you are viewing a single comment's thread
view the rest of the comments
The point is that right now language models are only good at generating coherent text. They aren't at the level where they can control an NPC's behaviour in a game world. NPCs need to actually interact with the world around them in order to be interesting. The words that come out of their mouths are only part of the equation.
Yes, language models are good for text. That's their sole purpose. They can't control characters. There are other models for that, and they are obviously not language models.
>3d navigation models
>look inside
>language models
Well, they actually can, at least to an extent. All you need to do is encode the worldstate in a way the LLM can understand, then decode the LLM's response back into that worldstate (most examples I've seen use JSON to good effect; rough sketch below).
That doesn't seem to be the focus of most of these developers though, unfortunately.
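For illustration, here's a minimal sketch of that loop in Python. Everything in it is hypothetical: `query_llm` is a stub standing in for whatever chat-completion API you'd actually call, and the action schema is made up.

```python
import json

def query_llm(prompt: str) -> str:
    # Stub standing in for a real chat-completion call; returns canned JSON.
    return '{"action": "pour", "target": "good_whiskey", "say": "Coming right up."}'

def npc_decide(worldstate: dict, npc_name: str, player_line: str) -> dict:
    """Encode the worldstate as JSON, ask the model for an action, decode the reply."""
    prompt = (
        f"You are {npc_name}, an NPC in a game.\n"
        f"World state: {json.dumps(worldstate)}\n"
        f"The player says: {player_line}\n"
        'Reply with JSON only: {"action": ..., "target": ..., "say": ...}'
    )
    try:
        # The decoded action is something the game engine can actually execute.
        return json.loads(query_llm(prompt))
    except json.JSONDecodeError:
        # Models sometimes emit malformed JSON; fall back to a safe no-op.
        return {"action": "idle", "target": None, "say": ""}

print(npc_decide({"good_whiskey_in_stock": True}, "bartender", "Break out the good stuff."))
```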
That assumes the model is trained on a large corpus of that worldstate encoding and understands what the worldstate means in the context of its actions and responses. That's basically impossible with the state of language models we have now.
I disagree. Take this paper, for example, keeping in mind it's already a year old (it used GPT-3.5-turbo).
The basic idea is pretty solid, honestly. Representing worldstate for an LLM is essentially the same as how you would represent it for something like a GOAP system anyway, so it's not a new idea by any stretch.
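For what it's worth, a GOAP-style worldstate is usually just a set of facts plus actions with preconditions and effects, and that same structure serializes straight to JSON for a model. A made-up toy example:

```python
# Toy worldstate: a dict of boolean facts, as in a classic GOAP setup.
worldstate = {
    "bar_is_open": True,
    "good_whiskey_in_stock": True,
    "player_at_counter": True,
}

# GOAP actions declare preconditions and effects over those facts.
pour_good_stuff = {
    "name": "pour_good_stuff",
    "preconditions": {"bar_is_open": True, "good_whiskey_in_stock": True},
    "effects": {"good_whiskey_in_stock": False},
}

def applicable(action: dict, state: dict) -> bool:
    """An action is applicable when every precondition holds in the state."""
    return all(state.get(k) == v for k, v in action["preconditions"].items())

def apply_action(action: dict, state: dict) -> dict:
    """Applying an action overwrites the affected facts."""
    return {**state, **action["effects"]}
```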
Right, there's no possible way actions can be represented by a stream of symbols.
Did you watch the demo? The player literally told the bartender to break out the good stuff and he did just that…
They're a massive and combinatorially exploding part of the equation, though.
Imagine a world where, instead of using AI to undermine writers and artists, we use it to explode their output. A writer could still write the details that make a character unique, plus the key and side-quest dialogue they write now, and all of that could be used to customize a model for that character.
The player could then have realistic conversations with those characters, which would make everything better. You could ask for directions to something and follow up with more questions the NPC should know the answers to, and so on.
Inconsequential filler characters, like the ramen shop owner in the example, suddenly become potentially memorable and genuinely useful in a way that could never be hand-crafted.
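As a sketch of what "customizing a model for a character" might look like at the cheap end: a writer-authored character card folded into the prompt rather than actual fine-tuning. All the names and fields here are invented for illustration.

```python
# Hypothetical writer-authored character card for the ramen shop owner.
RAMEN_OWNER = {
    "name": "Ramen shop owner",
    "backstory": "Has run the stall for thirty years and knows every alley nearby.",
    "knows": ["directions around the district", "local gossip", "the menu"],
    "forbidden": ["anything that doesn't exist inside the game world"],
}

def character_prompt(card: dict, player_line: str) -> str:
    """Fold the writer's card into a system prompt so follow-ups stay in character."""
    return (
        f"You are {card['name']}. Backstory: {card['backstory']}\n"
        f"You can answer questions about: {', '.join(card['knows'])}.\n"
        f"Never mention: {', '.join(card['forbidden'])}.\n"
        f"The player says: {player_line}\n"
        "Reply in character, in one or two sentences."
    )

print(character_prompt(RAMEN_OWNER, "Which way to the docks?"))
```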
This article is shitting on an incredibly early attempt to enable exactly that: it takes the fact that the tech isn't finished yet, crosses it with the author's biased opining, and produces Kotaku-style clickbait.
You do know quality beats quantity, right? Nobody likes Bethesda's radiant fetch quests, and this is just that but with exposition-dumping NPCs.
Not even just exposition. An NPC could easily go off script and start talking about stuff that breaks immersion. Like imagine you're sitting in a tavern in Skyrim and then some NPC comes up and is like "hey, you see any good movies lately?"