this post was submitted on 18 Oct 2023
4 points (75.0% liked)

PC Gaming

The Nvidia NV1 was released in 1995; it was the first GPU with 3D capabilities for the PC... from there, we know how things went.

Now it's 2023, so let's make some "retro-futuristic" predictions... what would you think of an AI board with an open-source driver and an open API (like Vulkan) that you could buy to power the AI in your video games? Would it make sense to you? What price range should it be in?

What is it supposed to do for your games? Well, that depends on the game. The quickest example I can think of is having endless discussions with the NPCs in your average single-player fantasy RPG.

For example, the game loads your 4-5 companions with set psychologies/behaviors: they are fixated on the main quest goal (talking to them is like talking to fanatics, which keeps the main quest as stable as possible), but you can "break" them by attempting to reveal certain truths (for example, breaking the fourth wall). If you go down this path, the game warns you that you will probably lock yourself out of the main quest (like killing an essential NPC in Morrowind).
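A minimal sketch of what that companion setup could look like in code, assuming the AI board runs some local dialogue model: each NPC gets a fixed "fanatic" persona prompt plus a counter for fourth-wall-breaking attempts. Every name here (`Companion`, `BREAK_THRESHOLD`, the persona wording) is invented for illustration, not from any real engine.

```python
# Hypothetical sketch of the companion design described above: a fixed
# persona prompt keeps the main quest stable, and a counter tracks the
# player's attempts to "break" the NPC.

BREAK_THRESHOLD = 3  # attempts before the companion "breaks"

def companion_system_prompt(name: str, quest_goal: str) -> str:
    """Build the fixed persona that keeps the main quest stable."""
    return (
        f"You are {name}, a companion in a fantasy RPG. "
        f"You are fanatically devoted to one goal: {quest_goal}. "
        "Steer every conversation back toward that goal. "
        "Never acknowledge that you are inside a video game."
    )

class Companion:
    def __init__(self, name: str, quest_goal: str):
        self.name = name
        self.prompt = companion_system_prompt(name, quest_goal)
        self.break_attempts = 0
        self.broken = False

    def register_fourth_wall_attempt(self) -> bool:
        """Count one break attempt; returns True once the NPC breaks."""
        self.break_attempts += 1
        if self.break_attempts >= BREAK_THRESHOLD:
            self.broken = True  # main quest may now be locked out
        return self.broken

npc = Companion("Serena", "recover the Sunken Crown")
for _ in range(3):
    npc.register_fourth_wall_attempt()
print(npc.broken)  # True: game would now warn about locking the main quest
```

The dialogue model itself only ever sees `npc.prompt`; the "broken" state is ordinary game logic, which is how the warning-before-lockout behavior stays deterministic.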

top 32 comments

AI dedicated boards already exist, and Nvidia can't produce them fast enough to keep up with demand.

Source: A senior AI engineer at AWS told me.

[–] RightHandOfIkaros@lemmy.world 6 points 1 year ago (2 children)

This just sounds like putting a second CPU on a PCIe board. I can't see this being a benefit for games because developers would never go through the pain of programming AI with advanced enough behaviours to even need a secondary CPU.

[–] thepianistfroggollum@lemmynsfw.com 0 points 1 year ago (1 children)

Programming AI is actually super easy, unless you decide to create your own foundation model. Even then, you would have data scientists building it, not devs.

Plenty of FMs and LLMs already exist that would be up to the task.

[–] RightHandOfIkaros@lemmy.world 0 points 1 year ago (1 children)

Programming AI with behaviour complex enough to need a second CPU would be hard. Syncing its output with the primary CPU could be a problem.

LLMs would not be useful for anything except maybe generating new dialogue, and even then they would need a lot of guardrails to prevent the end user from breaking them. For the purposes of dialogue and storytelling, most developers would opt to just pre-program dialogue like they always have.

Again, this sounds like a useless PC part that pretty much no game developer would ever take advantage of.

You don't need an LLM for this. You just need an FM that you fine-tune, and you'd be surprised at how little computing power is actually required.

For our uses (which are similar to what OP wants), it takes longer for us to do an OCR scan on the documents our AI works with than for SageMaker to do its thing on a rather small instance.

And, devs would just be implementing API calls, so it wouldn't be a big deal to make the switch.
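To illustrate the "devs would just be implementing API calls" point: the game code only assembles a request and parses a response. A rough sketch below — the endpoint URL and every field name are hypothetical placeholders, not tied to SageMaker, Bedrock, or any real service.

```python
import json

# Hypothetical sketch: one NPC dialogue turn packaged as the JSON body
# an inference API might expect. Field names are invented for illustration.

MODEL_ENDPOINT = "https://example.invalid/npc-dialogue"  # placeholder URL

def build_dialogue_request(npc_id: str, persona: str, player_line: str) -> str:
    """Package one dialogue turn as a JSON request body."""
    return json.dumps({
        "npc_id": npc_id,
        "system": persona,
        "messages": [{"role": "player", "text": player_line}],
        "max_tokens": 120,
    })

body = build_dialogue_request("blacksmith", "gruff but kind smith", "Any rumors?")
payload = json.loads(body)
print(payload["npc_id"])  # blacksmith
```

Swapping the backing model then means changing `MODEL_ENDPOINT`, not rewriting game logic, which is the "wouldn't be a big deal to make the switch" argument.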

[–] howrar@lemmy.ca 0 points 1 year ago (1 children)

Why wouldn't they? It's a lot easier to write out intricate backstories for each character/location independently than it is to build decision trees for every possible combination of decisions that the player makes. That's basically what current LLMs allow for.
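The scaling argument in this comment can be made concrete with some back-of-the-envelope arithmetic (the numbers below are purely illustrative): a hand-authored tree over player decisions grows exponentially, while independent backstories grow linearly with the cast size.

```python
# Illustrative comparison: exhaustive decision trees vs. per-character
# backstories. Both functions are toy cost models, not real authoring data.

def tree_leaves(num_binary_choices: int) -> int:
    """A full dialogue tree over n yes/no player choices needs 2**n leaves."""
    return 2 ** num_binary_choices

def backstory_cost(num_characters: int, words_per_backstory: int) -> int:
    """Writing independent backstories grows only linearly with the cast."""
    return num_characters * words_per_backstory

print(tree_leaves(20))          # 1048576 branches to hand-author
print(backstory_cost(20, 500))  # 10000 words of backstory
```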

[–] conciselyverbose@kbin.social 2 points 1 year ago (1 children)

No, they definitely don't.

Even with how bad most video game writing is, current LLMs are laughably short of useful for the purpose you're implying, and a game that replaced human writing with an LLM in real time would be a lock to be the worst-written game ever made.

[–] howrar@lemmy.ca 1 points 1 year ago (1 children)

Current LLMs being bad at it doesn't mean they'll always be bad at it. Their current state is the worst they're ever going to be, and we're talking about a hypothetical future here. I don't see any reason why they can't be improved into a state usable for writing a story with all the worldbuilding details provided.

[–] conciselyverbose@kbin.social 1 points 1 year ago (2 children)

Your claim was about current LLMs.

But it's a fundamental limitation of what LLMs are. They are not AI. They do not have anything in common with intelligence, and they don't have a particularly compelling path forward.

They also, even if they weren't actually terrible for almost every purpose, are obscenely heavy and what we're calling "current" isn't something capable of being executed on consumer hardware, dedicated card or not.

Finally, the idea that they can't get worse is just as flawed. They're heavily poisoning the well of future training data, and ridiculous copyright nonsense has the very real possibility of killing training further even though training on copyrighted material doesn't in any way constitute copyright infringement.

[–] howrar@lemmy.ca 1 points 1 year ago

Right, I see where the confusion comes from. I mention current LLMs to say that the architecture and pre-training procedure we currently have produce models that are already capable of generating the type of outputs that can be used in this context. I make no claims about the quality of the output, but some additional fine-tuning on the game's specific story can take things very far.

When you say LLMs are not AI, I'm guessing what you mean is that they are not artificial general intelligence (AGI), and that I agree with. But AI is very broad, including things as simple as A* search. Decision trees aren't any more AGI than LLMs and they've been able to produce some very compelling stories, so this isn't a very good argument. We don't need AGI to write good stories.

The compute resources required for these models can be reduced as well. On the hardware side, consumer hardware is continuously getting more powerful over time. On the software side, we're also seeing a lot of great results from the smaller 7B-parameter models, and these are general-purpose language models. If you just need something for your one game, you can likely distill the model into something much smaller.
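The distillation mentioned here trains a small "student" model to match the softened output distribution of a large "teacher". A minimal sketch of just the loss computation, on toy logits (no real model, all numbers invented):

```python
import math

# Toy sketch of knowledge distillation: the student is penalized by the
# KL divergence between temperature-softened teacher and student outputs.

def softmax(logits, temperature=1.0):
    """Convert logits to a probability distribution, softened by temperature."""
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) over temperature-softened distributions."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return sum(p * math.log(p / q) for p, q in zip(t, s))

teacher = [2.0, 1.0, 0.1]
loss_far = distillation_loss(teacher, [0.0, 0.0, 2.0])   # student disagrees
loss_near = distillation_loss(teacher, [2.0, 1.0, 0.1])  # student matches
print(loss_near < loss_far)  # True: matching the teacher minimizes the loss
```

In practice this loss would drive gradient updates on a much smaller network trained only on the game's own dialogue distribution.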

The training data that we used for the current generation of LLMs is already out there and curated. We know that this dataset can achieve the performance of today's LLMs, and you can continue to train on that same data in the future. As long as you control where your new data comes from, this is not an issue.

[–] thepianistfroggollum@lemmynsfw.com 1 points 1 year ago (2 children)

Maybe open source LLMs aren't up to the task, but proprietary ones certainly are.

Also, you wouldn't really need an LLM, just an FM that you fine-tune for your specific purpose.

[–] howrar@lemmy.ca 1 points 1 year ago (1 children)

What's this thing you call FM?

It's a foundation model. Basically it's the base algorithm that you train with data. LLMs are FMs that have been trained with an enormous amount of data, but they aren't necessary for every application, especially if you only need the AI/ML to perform a specific task.

Fine tuning an FM is just feeding it your own data.
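"Feeding it your own data" usually means handing the service prompt/completion pairs, often as a JSON Lines file. A small sketch, assuming a generic fine-tuning API — the `prompt`/`completion` field names are common but illustrative, not tied to any specific provider:

```python
import json

# Hypothetical sketch: serializing a game's dialogue corpus into the JSONL
# prompt/completion format that many fine-tuning APIs accept.

game_dialogue = [
    ("Player: Who rules this city?", "NPC: Lord Ashen, may his reign be long."),
    ("Player: Any work for a sellsword?", "NPC: The mines have rat trouble."),
]

def to_jsonl(pairs) -> str:
    """Serialize (prompt, completion) pairs into a JSONL training-file body."""
    return "\n".join(
        json.dumps({"prompt": p, "completion": c}) for p, c in pairs
    )

jsonl = to_jsonl(game_dialogue)
print(len(jsonl.splitlines()))  # 2
```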

[–] conciselyverbose@kbin.social 0 points 1 year ago (1 children)

No, they aren't. They aren't a little short of capable. You could multiply their capability overnight and still have no shot at avoiding being the worst-written game ever made.

There's a huge difference between stringing together words in the shape of a story and actually putting together something with a shred of cohesion. We're not talking mediocre here. We're talking laughably short of absolute dogshit.

[–] thepianistfroggollum@lemmynsfw.com 1 points 1 year ago (1 children)

Buddy, I have actual training in AI/ML from some of the leading engineers in the field, and my job leverages AI/ML very successfully to do a task really similar to what OP is looking for.

Maybe the versions available to the public to play with aren't up to the task, but using AWS Bedrock you can absolutely get results like OP wants.

[–] conciselyverbose@kbin.social 0 points 1 year ago (1 children)

You and the other 5 million companies hemorrhaging money on extremely heavy operations that are universally fucking terrible.

If you're willing to claim LLMs are even 1% of the way to what he asked for, you either have absolutely no clue what the tech is or you're a scammer trying to steal money from people.

The cutting edge of LLMs have nothing in common with intelligence.

[–] thepianistfroggollum@lemmynsfw.com 1 points 1 year ago (1 children)

Since you clearly can't read, I'm done discussing this with you. Maybe pick up a book and improve that reading comprehension a bit.

[–] conciselyverbose@kbin.social 0 points 1 year ago* (last edited 1 year ago)

I can read perfectly fine.

You claiming to be an expert when your assertion proves that it's literally impossible for you to be is simply not persuasive. You're doing the equivalent of claiming to be a geologist while arguing for a flat earth. It's inherently proof that you're full of shit.

[–] Omega_Jimes@lemmy.ca 2 points 1 year ago (1 children)

Yeah, so, dedicated hardware like that rarely ever pans out. I mean, graphics cards did, but there's not much of a market for gaming sound cards or PhysX cards anymore. I imagine that the specific type of AI that will be useful for this will eventually just be improved and made efficient enough that it'll be done by processors that already exist in your system.

[–] darkpanda@lemmy.ca 1 points 1 year ago

Yeah, like GPUs, which is basically what most LLMs are designed to run on now.

[–] norske@lemmynsfw.com 2 points 1 year ago

If the board provided enough benefit to outweigh the cost? Sure I might be talked into it.

Reminiscent of PhysX boards when they were a thing for 30 seconds. It’s all about the return on investment for me.

[–] squid@feddit.uk 2 points 1 year ago

Game publishers won't want local AI in games; it loses them too much control, and they couldn't use the always-online excuse to justify NPCs having AI-powered language. With how everything is becoming a subscription, I doubt we'll be getting powerful AI on a single PCIe board. My prediction is more along the lines of: we won't have gaming PCs, GPUs will be price-hiked, and anyone wanting to game will be on a subscription service.

[–] Blamemeta@lemm.ee 2 points 1 year ago (3 children)

Wouldn't that just be a GPU? That's literally what all our AIs run on. Just a ton of tiny little processors running in parallel.

[–] wccrawford@lemmy.world 7 points 1 year ago

That's kind of like saying "Wouldn't that just be a CPU?" about the GPU. It can be optimized. The question is whether it's worth optimizing for at the consumer level, like GPUs were.

[–] meteokr@community.adiquaints.moe 4 points 1 year ago (2 children)

While that is true now, in the future maybe there will be discrete hardware AI accelerators in the same way we have hardware video encoding.

[–] baconisaveg@lemmy.ca 3 points 1 year ago

Have you not seen the size of modern GPUs? It'll just be another chip on the 3.5-slot, 600W GPU.

They already exist.

They're meaning something more along the lines of an ASIC. A board specifically engineered for AI/ML.

[–] BetaDoggo_@lemmy.world 1 points 1 year ago

If this were to ever become mainstream this would likely be incorporated into the GPU for cost reasons. Small machine learning acceleration boards already exist but their uses are limited because of limited memory. Google has larger ones available but they're cloud only.

Currently I don't see many uses in gaming other than upscaling.

[–] solariplex 1 points 1 year ago (1 children)

I mean, that kind of board has existed for a while. They're usually called AI-accelerator boards, IIRC

Yup. Nvidia can't make em fast enough to keep up with demand.

[–] BCsven@lemmy.ca 1 points 1 year ago* (last edited 1 year ago)

Is the "1995, first 3D" claim accurate? We were using 3D CAD tools in the 1991-1995 range, before Nvidia. Edit: it seems S3 and Creative Labs had some earlier CAD cards, but prices were too high for general PC use until the Voodoo cards in '95.