
[–] parpol@programming.dev 11 points 4 weeks ago (1 children)

LLM-powered NPCs will quickly fall out of fashion as people realize they're literally just talking to ChatGPT.

Either the forced always-online requirement with privacy-violating telemetry for server-side LLMs, or the immensely high GPU memory requirements for local LLMs, will also cripple their games.

[–] slazer2au@lemmy.world 10 points 4 weeks ago (1 children)

> immensely high GPU memory requirements for local LLMs will also cripple their games.

Not really; you can tune an LLM to do what you want.

Why have an LLM know about 17th-century European politics or modern science when you're sticking it into a fantasy video game?
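
Something like a LoRA fine-tune on in-world dialogue is what I have in mind. A rough sketch with Hugging Face transformers and peft; the base model, dataset file, and hyperparameters below are placeholders, not a tested recipe:

```python
# A rough sketch of tuning a small model on in-world dialogue with LoRA.
# Base model, dataset file, and hyperparameters are placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # assumed small base model
tok = AutoTokenizer.from_pretrained(base)
if tok.pad_token is None:
    tok.pad_token = tok.eos_token  # Llama-style tokenizers ship without a pad token

model = AutoModelForCausalLM.from_pretrained(base)
# Train only small low-rank adapters instead of the full network.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# Hypothetical JSONL file of in-setting NPC dialogue, one {"text": ...} per line.
data = load_dataset("json", data_files="fantasy_dialogue.jsonl")["train"]
data = data.map(lambda ex: tok(ex["text"], truncation=True, max_length=512),
                remove_columns=data.column_names)

Trainer(
    model=model,
    args=TrainingArguments("npc-lora", per_device_train_batch_size=4,
                           num_train_epochs=3),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
```

The adapters only need to pick up the setting's lore and tone; the small base model already covers grammar.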

[–] parpol@programming.dev 10 points 4 weeks ago (2 children)

How small can you make an LLM before it starts having issues with grammar and coherence? I'd argue the bare minimum would still be rather large, and in video games we're already using VRAM for other resources. In a 3D game especially, I imagine very little VRAM is left to spare.

[–] rikudou@lemmings.world 7 points 4 weeks ago

You'd be surprised how small you can go. That's IMO pretty much the future of AI - a shit ton of small specialized models. While the heavyweights have their use, they're way too expensive and overkill for specialized tasks.
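
For rough scale, the weight footprint alone (ignoring the KV cache and runtime overhead, so real usage is somewhat higher):

```python
# Back-of-the-envelope weight memory only; KV cache and overhead not included.
def weight_gb(params_billions: float, bits_per_weight: int) -> float:
    return params_billions * 1e9 * bits_per_weight / 8 / 1024**3

print(f"{weight_gb(1.1, 4):.2f} GB")  # ~0.5 GB for a ~1B model at 4-bit
print(f"{weight_gb(7.0, 4):.2f} GB")  # ~3.3 GB for a 7B model at 4-bit
```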

Some small models can comfortably run on the CPU as well, and games can easily detect whether you have VRAM to spare and pick GPU or CPU based on that.
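
A minimal sketch of that kind of check with PyTorch; the 2 GB threshold and the model name are just assumptions:

```python
# Minimal sketch: fall back to CPU when there isn't enough free VRAM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def pick_device(min_free_gb: float = 2.0) -> str:
    if torch.cuda.is_available():
        free_bytes, _total = torch.cuda.mem_get_info()
        if free_bytes / 1024**3 >= min_free_gb:
            return "cuda"
    return "cpu"

device = pick_device()
name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # assumed small model
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).to(device)
```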

It's not there yet, but what some of the small models can do is impressive. And if you train them extensively on fantasy scripts, I can see them generating NPC lines on the fly.
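
And "on the fly" is basically a constrained prompt plus a short sampled completion; something like this, where the model and prompt format are placeholders:

```python
# Minimal sketch: generate one in-character NPC line from a constrained prompt.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # assumed small model
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

prompt = ("You are Maren, a dour blacksmith in a walled frontier town. "
          "Answer in one short line, in character.\n"
          "Player: Do you have any work for me?\n"
          "Maren:")
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=40, do_sample=True,
                     temperature=0.8, pad_token_id=tok.eos_token_id)
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```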

[–] slazer2au@lemmy.world 1 points 4 weeks ago (1 children)

Not sure, but what I am sure of is companies paying "AI engineers" (or whatever they're called) to trim them down to a usable point instead of hiring a better writing team.

[–] thanks_shakey_snake@lemmy.ca 3 points 4 weeks ago

That's immensely expensive, though, and not guaranteed to work, because much of that is still at the research stage. You're right that paring the models down to make them leaner and more specialized is the primary direction current research is pursuing, but it's far from certain at this point how to do it, how well it will work, and how small you can get them before they start to fall apart. Not something game studios are likely to gamble their budgets on, at least not yet.

We're nowhere near the "just hire a guy to trim it down instead of hiring writers" stage, and it's unclear yet whether or not that's where we'll end up. We could pull off "just hire a guy to fine-tune an existing foundation model," but that doesn't make them smaller.