this post was submitted on 16 Apr 2024
807 points (89.1% liked)
linuxmemes
21304 readers
994 users here now
Hint: :q!
Sister communities:
- LemmyMemes: Memes
- LemmyShitpost: Anything and everything goes.
- RISA: Star Trek memes and shitposts
Community rules
1. Follow the site-wide rules
- Instance-wide TOS: https://legal.lemmy.world/tos/
- Lemmy code of conduct: https://join-lemmy.org/docs/code_of_conduct.html
2. Be civil
- Understand the difference between a joke and an insult.
- Do not harass or attack members of the community for any reason.
- Leave remarks of "peasantry" to the PCMR community. If you dislike an OS/service/application, attack the thing you dislike, not the individuals who use it. Some people may not have a choice.
- Bigotry will not be tolerated.
- These rules are somewhat loosened when the subject is a public figure. Still, do not attack their person or incite harassment.
3. Post Linux-related content
- Including Unix and BSD.
- Non-Linux content is acceptable as long as it makes a reference to Linux. For example, the poorly made mockery of sudo in Windows.
- No porn. Even if you watch it on a Linux machine.
4. No recent reposts
- Everybody uses Arch btw, can't quit Vim, and wants to interject for a moment. You can stop now.
Please report posts and comments that break these rules!
Important: never execute code or follow advice that you don't understand or can't verify, especially here. The word of the day is credibility. This is a meme community -- even the most helpful comments might just be shitposts that can damage your system. Be aware, be smart, don't fork-bomb your computer.
founded 1 year ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
Me after I spent a whole evening being unable to boot into GRUB after trying to get Wayland to work. Wayland will have to wait a bit longer...
NVidia?
Yuppers. I need CUDA for my machine learning projects, both as a hobby and professionally. I considered AMD and their alternative at the time, but it wasn't supported on their consumer cards back then, and I also didn't fully trust their commitment. It's getting better though, so hopefully AMD can convince me for my next GPU in a few years.
Proprietary NVidia API for parallel computation.
Very useful for machine learning stuff. And for crypto, though that has fallen out of fashion nowadays.
Basically, if you're doing a fuck-ton of math and want it to happen very fast, you want a GPU to do it (GPUs are literally made for that -- the fact that this helps them draw video games is a happy consequence), and NVidia's CUDA tech makes it... Easier? Faster? Not sure what the proper difference is, but yeah.
Disclaimer: I only think this is true, correct me if I'm wrong.
GPUs do floating-point math, usually in their own (lower) precision. They have way more compute cores than typical CPUs do, because each core only does FP and is therefore much smaller. They also have their own memory on the board, with far higher bandwidth than system RAM.
Usually we feed in a bunch of matrices (multi-dimensional arrays of numbers) and expect some back, while the entire load of math is done on the GPU. It can do millions of computations very fast, in parallel, without ever touching anything external like RAM or, goodness forbid, disk or network (which are monumentally slower).
Now CUDA is the 'platform' that lets you write general-purpose code for those GPU compute cores, and also handles moving data between the GPU, the CPU, and RAM.
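Rough sketch of what that looks like in practice (my own toy illustration, not from anyone in this thread): a minimal CUDA program that adds two big arrays. The host code copies data from RAM into GPU memory, launches a "kernel" that runs the same tiny function on thousands of GPU threads at once, and copies the result back:

```
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Kernel: each GPU thread adds one pair of elements.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;            // one million floats
    const size_t bytes = n * sizeof(float);

    // Host (CPU/RAM) buffers.
    float *hA = (float *)malloc(bytes);
    float *hB = (float *)malloc(bytes);
    float *hC = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hA[i] = 1.0f; hB[i] = 2.0f; }

    // Device (GPU) buffers.
    float *dA, *dB, *dC;
    cudaMalloc((void **)&dA, bytes);
    cudaMalloc((void **)&dB, bytes);
    cudaMalloc((void **)&dC, bytes);

    // Move the input data from RAM to GPU memory.
    cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

    // Launch enough blocks of 256 threads to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vectorAdd<<<blocks, threads>>>(dA, dB, dC, n);

    // Copy the result back to RAM.
    cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hC[0]);     // expect 3.0

    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    free(hA); free(hB); free(hC);
    return 0;
}
```

You build this with NVidia's nvcc compiler; the point is that the million additions happen in parallel across the GPU's cores instead of one after another on the CPU.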
This is a weak example, but back in the day, I mined BTC on my PC. IIRC, it just runs SHA-256 hashes over and over until it finds a specific output. A hash is just a mathematical algorithm, and you can run it on both the CPU and the GPU.
My CPU at the time (I think it was a 6-core Phenom II?) could output 12 million hashes per second. My GPU - an AMD Radeon 6990 - after some tweaks and with a full-size table fan blowing into the open chassis from the side, could get close to 800 Mhash/s.
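That works out to roughly a 67× speedup (800 Mhash/s ÷ 12 Mhash/s ≈ 67) just from moving the same algorithm onto the GPU.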
So there are direct incentives to use GPUs for things other than gaming; machine learning in particular is all about floating-point math. But to do that, you want to write your own software that implements your algorithms and squeezes every last bit of performance out of the hardware, and that's what CUDA lets you do.
CUDA is specific to nVidia GPUs. AMD is trying to catch up with ROCm, but it came almost a decade later, so they have a lot of catching up to do. Intel also started their own oneAPI. Both oneAPI and ROCm are open source, while CUDA is closed source, so only nVidia can modify it.
Honestly, if it weren't for NVIDIA's BDSM chokehold on machine learning, they would be pretty done for.
Their GPUs have brute force and are always top recommendations for gaming PCs. I buy all-red myself, but I feel like nVidia is still more popular in the gaming community, excluding Linux users.
Secure Boot?