[–] Coasting0942@reddthat.com 24 points 5 months ago (3 children)

Exqueese me? How does AI impact electrical use? Cause last I heard we’re supposed to be cutting back on energy usage.

[–] Thrashy@lemmy.world 21 points 5 months ago

This is a reference to upscaling algorithms informed by machine learning a la Nvidia's DLSS -- seems like AMD is finally going to add the inference hardware to their GPUs that will let them close that technological gap with the competition. I'm guessing it won't come until RDNA5, though.

[–] QuadratureSurfer@lemmy.world 15 points 5 months ago

If you're trying to compare "AI" and electrical use, you need to look at each use case and compare how we traditionally do things with how some form of "AI" does it. Even then, we need to ask whether there's a better way to do it, or whether the gain in productivity is worth the extra energy.

For example, take the rain sensor on your car.
Now, you could set up an AI/ML model with a camera and computer vision to detect when to turn on your windshield wipers.
But why do that when you could use a little sensor that shines a small, low-power laser at the windshield and activates the wipers when it detects a drop in the energy that's normally reflected back?
The dedicated sensor with its low-power laser will use far less energy and be far more efficient for this use case.
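A toy sketch of that threshold logic (every name and number here is made up for illustration, not a real automotive API):

```python
# Toy rain-sensor logic: compare reflected energy against a dry-glass
# baseline and toggle the wipers on a simple threshold. Purely illustrative.

DRY_BASELINE = 1.00    # normalized reflected energy with a dry windshield
RAIN_THRESHOLD = 0.85  # water scatters the beam, so less light comes back

def wipers_needed(reflected_energy: float) -> bool:
    """True when the reflected signal drops enough to suggest rain."""
    return reflected_energy < RAIN_THRESHOLD * DRY_BASELINE

print(wipers_needed(0.98))  # dry glass -> False
print(wipers_needed(0.60))  # droplets scattering the beam -> True
```

No camera, no model, no inference hardware; just a comparison.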

On the other hand, I could spend time and electricity watching a video over and over, trying to translate what someone said from one language to another, or I could use Whisper (another ML model) to transcribe and translate it in a matter of seconds. In this case, Whisper uses less electricity.
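For anyone who hasn't tried it, a minimal sketch of that workflow with the open-source openai-whisper package (the file name is a placeholder; you also need ffmpeg installed):

```python
# pip install openai-whisper   (requires ffmpeg on the system)
import whisper

model = whisper.load_model("base")   # small model, runs on CPU or GPU
# task="translate" transcribes the speech and translates it into English
result = model.transcribe("interview.mp3", task="translate")
print(result["text"])
```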

In the context of this article we're talking about DLSS, where Nvidia has trained a few different ML models for upscaling, optical flow (predicting where pixels/objects are moving next), and frame generation (predicting what the in-between frames will look like to boost your FPS).

This can potentially save energy because it puts less of a load on the GPU: most of the rendering is done at a lower resolution before the image is upscaled at the end. But honestly, I haven't seen anyone compare the energy use differences on this yet... and either way you're already using a lot of electricity just by gaming.
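As a back-of-the-envelope illustration of how much shading work the lower internal resolution saves (the 2/3-per-axis scale is the commonly cited figure for DLSS "Quality" mode at 4K; treat the numbers as illustrative):

```python
# How many pixels get shaded natively vs. at the internal render resolution.
# The 2/3-per-axis scale is the commonly cited DLSS "Quality" figure, used
# here purely as an illustration.

native_w, native_h = 3840, 2160               # 4K output
scale = 2 / 3                                 # internal render scale per axis
internal_w, internal_h = int(native_w * scale), int(native_h * scale)

native_pixels = native_w * native_h           # ~8.3 million
internal_pixels = internal_w * internal_h     # ~3.7 million

print(f"internal render: {internal_w}x{internal_h}")                              # 2560x1440
print(f"fraction of native shading work: {internal_pixels / native_pixels:.0%}")  # ~44%
```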

[–] kerrigan778@lemmy.world 13 points 5 months ago (1 children)

In this context it is being used to reduce rendering load and therefore be less intensive on computer resources.

[–] snooggums@midwest.social 9 points 5 months ago (3 children)

Techniques to render only what is on screen have been a thing for decades.

[–] Brokkr@lemmy.world 14 points 5 months ago

This is kind of the opposite of that idea though. This is saying that not everything put on the screen needs to be computed from the game engine. Some of the content on the screen can be inferred from a predictive model. What remains to be seen is if that requires less computing power from the GPU.

[–] QuadratureSurfer@lemmy.world 11 points 5 months ago* (last edited 5 months ago) (1 children)

Yes, but with DLSS we're adding ML models to the mix, each trained on a different aspect:

- Frame generation (interpolating between frames): normally you might get 30 FPS, but the ML model has an idea of what the in-between frames should look like (based on what it has been trained on), so it can insert additional frames and boost your framerate to 60 FPS or more.

- Upscaling (making the picture larger): the GPU can do most of its work at a smaller resolution, which makes its job easier, while the ML model, trained to enlarge an image and fill in the right pixels, keeps everything looking good at the output resolution.

- Optical flow: this ML model has been trained on motion, i.e. which objects/pixels move where, so that frame generation can predict the in-between frames more accurately. (A rough non-ML illustration of the concept follows below.)
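If it helps, here's what optical flow computes, sketched with OpenCV's classical (non-ML) Farneback algorithm on synthetic frames. This is just to illustrate the concept; it is not Nvidia's hardware optical flow or anything DLSS-specific:

```python
# Classical optical flow: estimate, per pixel, where things moved between
# two frames. Synthetic frames: a textured image shifted 10 px to the right.
import cv2
import numpy as np

h, w = 120, 160
rng = np.random.default_rng(0)
frame1 = (rng.random((h, w)) * 255).astype(np.uint8)
frame1 = cv2.GaussianBlur(frame1, (5, 5), 0)   # smooth so gradients are usable
frame2 = np.roll(frame1, shift=10, axis=1)     # whole frame shifted 10 px right

# Args: prev, next, flow, pyr_scale, levels, winsize, iterations,
#       poly_n, poly_sigma, flags
flow = cv2.calcOpticalFlowFarneback(frame1, frame2, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

# Away from the wrap-around seam, the estimated motion should be ~(+10, 0).
dx = flow[:, 20:-20, 0].mean()
dy = flow[:, 20:-20, 1].mean()
print(f"estimated motion: dx={dx:.1f} px, dy={dy:.1f} px")
```

Frame generation then uses motion like this to place objects sensibly in the predicted in-between frame.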

On top of that, Nvidia can ship updated ML models, trained on specific game titles, through their driver updates.

While each of these could be accomplished with older techniques, I think the results we're already seeing speak for themselves.

Edit: added some sources below and fixed up optical flow description.

https://www.digitaltrends.com/computing/everything-you-need-to-know-about-nvidias-rtx-dlss-technology/
https://www.youtube.com/watch?v=pSiczcJgY1s

[–] baconisaveg@lemmy.ca 7 points 5 months ago (1 children)

It has, yes, but the techniques Carmack used in Doom's engine probably don't have much of an impact on something like Cyberpunk 2077.

[–] snooggums@midwest.social -5 points 5 months ago (1 children)

The exact techniques, maybe not. But the fundamental approach of only rendering what you see has been continued since then.

[–] baconisaveg@lemmy.ca 2 points 5 months ago

Right, so what is the point in bringing it up?

"Sony just released a new 150 megapixel mirrorless digital camera!"

"Cameras have been a thing since the 1800's..."

[–] Even_Adder@lemmy.dbzer0.com 4 points 5 months ago

Here's an old Digital Foundry roundtable where Bryan Catanzaro, Vice President of Applied Deep Learning Research at Nvidia, talks about this kind of stuff.