If you can run something on a pregnancy test, running it on a GPU is not really that impressive
Games
Video game news-oriented community. No, NanoUFO is not a bot :)
Posts.
- News oriented content (general reviews, previews or retrospectives allowed).
- Broad discussion posts (preferably not only about a specific game).
- No humor/memes, etc.
- No affiliate links
- No advertising.
- No clickbait, editorialized, or sensational titles. State the game in question in the title. No all caps.
- No self promotion.
- No duplicate posts; the newer post will be deleted unless it has more of the discussion.
- No politics.
Comments.
- No personal attacks.
- Obey instance rules.
- No low-effort comments (one or two words, emoji, etc.).
- Please use spoiler tags for spoilers.
My goal is just to have a community where people can go and see what new game news is out for the day and comment on it.
The Doom pregnancy test was technically Doom displayed on an OLED screen added to the pregnancy test, while the game itself ran on a separate machine.
Wake me up when I can play it on a pacemaker.
Running it on a GPU + CPU isn't very impressive. Running it on just the GPU is a little more involved.
Recently someone even managed to make a proof-of-concept version of Doom running on a neural network.
If I'm thinking of the same thing you are, I believe they were/are working on getting biological neuron chips to play a traditionally running copy of Doom, rather than making Doom itself run on a neural network.
Nope, a neural network:
https://arxiv.org/abs/2408.14837 "Diffusion Models Are Real-Time Game Engines"
We present GameNGen, the first game engine powered entirely by a neural model that enables real-time interaction with a complex environment over long trajectories at high quality. GameNGen can interactively simulate the classic game DOOM at over 20 frames per second on a single TPU. Next frame prediction achieves a PSNR of 29.4, comparable to lossy JPEG compression. Human raters are only slightly better than random chance at distinguishing short clips of the game from clips of the simulation. GameNGen is trained in two phases: (1) an RL-agent learns to play the game and the training sessions are recorded, and (2) a diffusion model is trained to produce the next frame, conditioned on the sequence of past frames and actions. Conditioning augmentations enable stable auto-regressive generation over long trajectories.
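For anyone curious what "conditioned on the sequence of past frames and actions" means in practice, here's a rough sketch of the paper's second training phase. This is not the GameNGen code: it stands in for the diffusion model with a plain convolutional regressor, the model/variable names (`NextFramePredictor`, `CONTEXT`, `N_ACTIONS`, etc.) are made up, and random tensors take the place of the recorded RL-agent gameplay.

```python
# Minimal sketch (PyTorch) of phase 2: predict the next frame from a
# window of past frames plus the player's action.
import torch
import torch.nn as nn

CONTEXT = 4          # number of past frames the model sees (assumption)
N_ACTIONS = 8        # hypothetical size of the action space
H, W = 60, 80        # tiny stand-in resolution

class NextFramePredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.action_embed = nn.Embedding(N_ACTIONS, 16)
        # Input channels: CONTEXT grayscale frames + 16 broadcast action channels.
        self.net = nn.Sequential(
            nn.Conv2d(CONTEXT + 16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, past_frames, action):
        # past_frames: (B, CONTEXT, H, W); action: (B,) integer action ids
        a = self.action_embed(action)                  # (B, 16)
        a = a[:, :, None, None].expand(-1, -1, H, W)   # broadcast over pixels
        x = torch.cat([past_frames, a], dim=1)
        return self.net(x)                             # predicted next frame

model = NextFramePredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    # Random data as a placeholder for recorded (past frames, action, next frame) tuples.
    past = torch.rand(8, CONTEXT, H, W)
    act = torch.randint(0, N_ACTIONS, (8,))
    target = torch.rand(8, 1, H, W)

    pred = model(past, act)
    loss = nn.functional.mse_loss(pred, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

At inference time you would feed the model's own predictions back in as the past frames, which is the auto-regressive generation the abstract says the conditioning augmentations are there to stabilize.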
Wild. Neat!
GPU goes brrrr