submitted 24 Oct 2023* (last edited 11 months ago) by neogeo@sh.itjust.works to c/linux@lemmy.ml
 

Hey all! This is my first post, so I'm sorry if anything is formatted incorrectly or if this is the wrong place to ask. I've recently saved up enough to upgrade my graphics card ($350 budget). I've heard great things about AMD on Linux, and I appreciate open source drivers so as not to be at the mercy of NVIDIA. My first choice was a 6700 XT, but then I heard that NVIDIA has significantly higher performance in workstation tasks (not to mention the benefits of CUDA and NVENC), so I've been looking into a 3060 or 3060 Ti. I do a bit of gaming in my free time, but it's not my top priority, and I can almost guarantee that any option in this price range will be more than enough for the games I play. Ultimately my questions come down to:

  1. Would NVIDIA or AMD provide more raw performance on Linux in my price range?
  2. Which would be better for productivity (CUDA, encoding, etc.)? I mainly use Blender, FreeCAD, and SolidWorks, but I appreciate having extra features for any software I may use in the future.
  3. Which option would hold up best after a few years? (I've seen AMD improve performance with driver updates before, but the NVK driver also looks promising. I also host some servers and tend to cycle components from my main system into my Proxmox cluster.)

A few more details to fill in any missing info: my current system is a Ryzen 7 3700X, a GTX 1050 Ti, 32 GB of RAM, an 850 W PSU, and an NVMe SSD. I've only ever used NVIDIA cards, but AMD looks like a great alternative. As another side note, I plan on running my new GPU alongside my old one, so NVENC is not too much of a concern; I'd still like to know if there's any way to run CUDA apps on AMD.

Thanks in advance for any thoughts or ideas!

Edit 1: Thanks so much for all of the feedback! I'm not going to purchase a GPU quite yet, probably in a few weeks. First I'll be testing Wayland with my 1050 Ti and researching how much I actually need each card's features. Thanks again; I'll update the post when I do order said GPU.

Edit 2: I made an interesting decision and actually got the Arc A770. I'd be happy to discuss exactly why, and some of the pros and cons so far, but I do plan on eventually compiling a more in-depth review somewhere, sometime.

[–] neogeo@sh.itjust.works 2 points 1 year ago

I'm new to ROCm and HIP; do you think they'll improve over time? Does AMD have an existing implementation for any CUDA software, or must developers port things over to ROCm? I ask because most of my CUDA software already runs OK-ish on my 1050 Ti, so going AMD might give reasonable performance now, with possible ROCm development in the future. You also mentioned AI/ML, and I'd actually really like to give TensorFlow a try at some point. It seems that each GPU has features that are still in development (NVK vs. ROCm), and whichever I go with, it sounds like I'll be crossing my fingers for it to mature at a reasonable pace. At the moment I'm leaning NVIDIA, because if NVK gains traction in a few years, it could provide a good open source alternative to switch to.
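
For what it's worth, a concrete data point on the "existing implementation" question: ROCm builds of PyTorch deliberately reuse the `torch.cuda` namespace, so Python-level code written for NVIDIA runs unchanged on a supported AMD card. A minimal sketch, assuming a ROCm build of PyTorch is installed (untested here):

```python
import torch

# On a ROCm build, torch.version.hip is a version string; on a CUDA
# build it is None -- a quick way to tell which backend you have.
print("HIP:", torch.version.hip)

# The CUDA-flavored API is the same on both backends, so this needs
# no porting to run on an AMD card.
device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.rand(1024, 1024, device=device)
print((x @ x).sum().item(), "computed on", device)
```

Native CUDA C++ is a different story: that does have to be ported, although AMD's HIP API mirrors CUDA closely and ships translation tooling (hipify) to automate most of the work.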

[–] cybersandwich@lemmy.world -1 points 1 year ago

They will definitely improve over time--if only because it couldn't possibly get worse. :)

Joking aside, they've made significant improvements even over the last few months. TensorFlow now has a variant that supports ROCm, and so does PyTorch. Both of those happened in the last six months or so.
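
If you want to try those, this is roughly what it looks like today; package names and wheel URLs are moving targets, so treat this as a sketch and check the current ROCm install docs:

```python
# Install steps (shell) -- names/versions are assumptions as of late 2023:
#   pip install tensorflow-rocm
#   pip install torch --index-url https://download.pytorch.org/whl/rocm5.6
import tensorflow as tf

# A working ROCm build reports the AMD card as an ordinary GPU device.
print(tf.config.list_physical_devices("GPU"))
```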

AMD says it's prioritizing ROCm (https://www.eetimes.com/rocm-is-amds-no-1-priority-exec-says/), but if you read the Hacker News thread on that same article, you'll see quite a few complaints and some skepticism.

The thing about CUDA is that it has over a decade of a head start, and NVIDIA, for all its warts, has been actively supporting it for that entire time. So many things that just work with NVIDIA and CUDA are things you'll have to cobble together and cross your fingers over with ROCm. There is an entire ecosystem built around CUDA, so tools, forums, guides, etc. are all a quick web search away. That doesn't exist (yet) for ROCm.

To put it in perspective: I have a 6900 XT (which I regretfully sold my 3070 Ti to buy). I spent a week just fighting with ROCm to get it to work. It involved editing some system files to trick the installer into thinking my Pop!_OS install was Ubuntu, and carefully installing JUST the ROCm stack--since I still wanted to use the open source AMD drivers for everything else. I finally got it working, but NO libraries supported it at the time, so none of the online guides, tutorials, etc. could be used. The documentation is horrendous, imo.
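
If anyone else is stuck at that stage, the quickest sanity check for whether ROCm can actually see the card is the `rocminfo` tool; a rough sketch of wrapping it, assuming the ROCm userspace utilities are on PATH (untested):

```python
import shutil
import subprocess

if shutil.which("rocminfo") is None:
    print("rocminfo not found -- the ROCm userspace isn't installed")
else:
    out = subprocess.run(["rocminfo"], capture_output=True, text=True).stdout
    # GPUs show up as agents with a gfx* ISA name (a 6900 XT is gfx1030);
    # if only the CPU agent is listed, the driver side is the problem.
    gpus = sorted({tok for tok in out.split() if tok.startswith("gfx")})
    print("GPU agents:", gpus or "none found")
```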

I actually got so annoyed that I bought a used 1080 Ti to do the AI/ML work I needed to do. It took me 30 minutes to install it in a headless Ubuntu server and get my code up and running. It's been working without issue for six months.