this post was submitted on 18 Oct 2023
PC Gaming
you are viewing a single comment's thread
This just sounds like putting a second CPU on a PCIe board. I can't see this benefiting games, because developers would never go through the pain of programming AI with behaviours advanced enough to need a secondary CPU.
Programming AI is actually super easy, unless you decide to create your own foundation model. Even then, you would have data scientists building it, not devs.
Plenty of FMs and LLMs already exist that would be up to the task.
Programming AI with behaviour complex enough to need a second CPU would be hard, and syncing its output with the primary CPU could be a problem.
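The syncing concern can be made concrete. One common pattern (a minimal sketch, not anything from the article; all names here are made up) is to run the slow AI on a worker and have the game loop poll for results without blocking, falling back to scripted behaviour until a decision arrives:

```python
import queue
import threading
import time

def ai_worker(requests_q, results_q):
    """Hypothetical background worker: consumes game-state snapshots and
    produces NPC decisions. Stands in for an off-CPU AI accelerator."""
    while True:
        state = requests_q.get()
        if state is None:  # shutdown sentinel
            break
        time.sleep(0.01)  # simulate slow inference
        results_q.put({"npc": state["npc"], "action": "advance"})

requests_q, results_q = queue.Queue(), queue.Queue()
worker = threading.Thread(target=ai_worker, args=(requests_q, results_q))
worker.start()

requests_q.put({"npc": 1, "tick": 0})

# Game loop: never blocks on the AI; keeps using the last known
# (scripted) decision until a fresh one arrives.
latest = {"npc": 1, "action": "idle"}  # scripted default
for frame in range(5):
    try:
        latest = results_q.get_nowait()
    except queue.Empty:
        pass  # no new decision this frame
    time.sleep(0.005)  # simulate frame time

requests_q.put(None)  # tell the worker to shut down
worker.join()

# Drain any decision that landed after the last frame polled.
try:
    latest = results_q.get_nowait()
except queue.Empty:
    pass

print(latest["action"])  # → advance
```

The point of the queue is that the AI's latency never stalls a frame; the cost is that decisions are always at least one tick stale.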
LLMs would not be useful for anything except maybe generating new dialogue, and even that would need a lot of guardrails to prevent the end user from breaking it. For the purposes of dialogue and storytelling, most developers would opt to just pre-program dialogue like they always have.
Again, this sounds like a useless PC part that pretty much no game developer would ever take advantage of.
You don't need an LLM for this. You just need a FM that you fine tune, and you'd be surprised at how little computing power is actually required.
For our uses (which are similar to what OP wants), it takes longer for us to do an OCR scan on the documents our AI works with than for SageMaker to do its thing on a rather small instance.
And devs would just be implementing API calls, so it wouldn't be a big deal to make the switch.
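"Just implementing API calls" might look something like this sketch. The endpoint URL, payload schema, and helper names are all invented for illustration (a real hosted model defines its own request format), and the network call is stubbed out so the example is self-contained:

```python
import json

def npc_dialogue_request(endpoint_url, npc_name, context):
    """Build the JSON payload a game client might POST to a hosted,
    fine-tuned model. The field names here are illustrative only."""
    return {
        "url": endpoint_url,
        "body": json.dumps({
            "inputs": f"{npc_name} reacts to: {context}",
            "parameters": {"max_new_tokens": 64, "temperature": 0.7},
        }),
    }

def fake_invoke(request):
    """Stand-in for the actual network call (e.g. an HTTP POST or the
    SageMaker runtime's invoke_endpoint), so the sketch runs offline."""
    payload = json.loads(request["body"])
    return {"generated_text": f"[canned reply to: {payload['inputs']}]"}

req = npc_dialogue_request(
    "https://example.invalid/npc-model", "Guard", "the player draws a sword"
)
reply = fake_invoke(req)
print(reply["generated_text"])
```

Swapping the backend then only means changing what sits behind `fake_invoke`; the game code keeps building the same requests.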