this post was submitted on 18 Oct 2023
4 points (75.0% liked)

PC Gaming

 

The Nvidia NV1 was released in 1995; it was one of the first consumer graphics cards with 3D capabilities for the PC... from there, we know how things went.

Now it's 2023, so let's make a "retro-futuristic" prediction: what would you think of an AI board with an open-source driver and an open API (like Vulkan) that you could buy to power the AI in your video games? Would it make sense to you? What price range should it be in?

What would it actually do for your games? Well, that depends on the game. The quickest example I can think of is having endless conversations with the NPCs in your average single-player fantasy RPG.

For example, the game loads your 4~5 companions with a defined psychology/behavior: they are fixated on the main quest goal (talking to them is like talking to fanatics, which keeps the main quest as stable as possible), but you can "break" them by trying to reveal certain truths (for example, breaking the fourth wall). If you go down that path, the game warns you that you're probably going to lock yourself out of the main quest (like in Morrowind when you kill an essential NPC).
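To make the idea a bit more concrete, here's a rough Python sketch of how that companion setup could be wired up. The class names, the "conviction" counter, and the quest-locking rule are all invented for illustration; no real game engine or LLM API is assumed.

```python
# Hypothetical sketch: each companion carries a fixed persona that keeps them
# "fanatically" on the main quest, plus a breaking mechanic that can lock the
# quest out, Morrowind-style.
from dataclasses import dataclass, field

@dataclass
class Companion:
    name: str
    persona: str                 # system-style prompt fed to the dialogue model
    conviction: int = 3          # how many "truth reveals" they can withstand
    broken: bool = False

    def reveal_truth(self) -> None:
        """Player tries to break the fourth wall / reveal a hidden truth."""
        self.conviction -= 1
        if self.conviction <= 0:
            self.broken = True

@dataclass
class Party:
    companions: list[Companion] = field(default_factory=list)

    @property
    def main_quest_locked(self) -> bool:
        # The warning the game would give: break every companion, lose the quest.
        return all(c.broken for c in self.companions)

party = Party([
    Companion("Aela", "You are utterly fixated on completing the main quest."),
    Companion("Brin", "You deflect any talk that strays from the main quest."),
])
party.companions[0].reveal_truth()
print(party.main_quest_locked)  # False until every companion is broken
```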

[–] conciselyverbose@kbin.social 1 points 1 year ago (9 children)

Your claim was about current LLMs.

But it's a fundamental limitation of what LLMs are. They are not AI. They do not have anything in common with intelligence, and they don't have a particularly compelling path forward.

They are also, even setting aside that they're terrible for almost every purpose, obscenely heavy, and what we're calling "current" isn't something that can be executed on consumer hardware, dedicated card or not.
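For a rough sense of the scale involved, here's a back-of-envelope memory estimate; the parameter counts are illustrative and not tied to any specific model, and it only counts the weights (no KV cache or activations).

```python
# Weights-only VRAM estimate for running a model at 16-bit precision.
def vram_gb(params_billion: float, bytes_per_param: int = 2) -> float:
    """Rough memory footprint of the weights alone, in GiB."""
    return params_billion * 1e9 * bytes_per_param / 1024**3

for size in (7, 70, 175):
    print(f"{size}B params -> ~{vram_gb(size):.0f} GiB just for the weights")
```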

Finally, the idea that they can't get worse is just as flawed. They're heavily poisoning the well of future training data, and ridiculous copyright nonsense has the very real possibility of killing training further even though training on copyrighted material doesn't in any way constitute copyright infringement.

[–] thepianistfroggollum@lemmynsfw.com 1 points 1 year ago (7 children)

Maybe open source LLMs aren't up to the task, but proprietary ones certainly are.

Also, you wouldn't really need an LLM, just an FM that you fine-tune for your specific purpose.

[–] howrar@lemmy.ca 1 points 1 year ago (1 children)

What's this thing you call FM?

It's a foundation model. Basically, it's the base model that you train with data. LLMs are FMs that have been trained on an enormous amount of data, but they aren't necessary for every application, especially if you only need the AI/ML to perform a specific task.

Fine-tuning an FM is just feeding it your own data.
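As a simplified illustration of what "feeding it your own data" can look like in practice, here's a minimal causal-LM fine-tune. It assumes the Hugging Face transformers/datasets stack (the comment doesn't name any library), and the base model and data file are just placeholders.

```python
# Minimal sketch: fine-tune a small base model on your own dialogue lines.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "distilgpt2"                       # small stand-in base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token       # GPT-2 family has no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# "Your own data": one line of game dialogue per line in a plain text file.
dataset = load_dataset("text", data_files={"train": "npc_dialogue.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="npc-ft", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```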
