this post was submitted on 12 Feb 2025
Linux
The only reason I still go Nvidia is that I self-host AI, which, as far as I know, takes advantage of CUDA and generally runs better on Nvidia cards, or at the very least is easier to set up. Really, the top reason is that it's the devil I know right now.
If I didn't self-host AI, I would 100% go AMD, especially if you don't want to use proprietary drivers. That said, my old gaming laptop runs NixOS with Nouveau, and there have definitely been improvements since I first tried it years ago, though I don't do much gaming on it. It's more of a TV media station these days (so I can avoid the stupid smart TV bloat agenda, where your TV gets gradually slower and holds fewer increasingly bloated apps over time).
If it's just about self-hosting and not training, ROCm works perfectly fine for that. I self-host DeepSeek R1 32b and FLUX.1-dev on my 7900 XTX.
You even get more VRAM for cheaper.
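As a rough sanity check on the VRAM point, here's a back-of-envelope estimate (my own numbers, not from this thread): 4-bit quantized weights plus roughly 20% overhead for the KV cache and activations.

```python
# Back-of-envelope VRAM estimate for a quantized LLM.
# Assumptions (mine, not from the thread): 4-bit (Q4) weights,
# ~20% overhead for KV cache and activations.

def vram_gb(params_billion, bits_per_weight=4, overhead=1.2):
    bytes_weights = params_billion * 1e9 * bits_per_weight / 8
    return bytes_weights * overhead / 1e9

print(f"32B @ Q4: ~{vram_gb(32):.1f} GB")  # ~19.2 GB
```

By that estimate a 32b model at Q4 lands around 19 GB, which fits in the 24 GB of a 7900 XTX but not in the 10 GB of a 3080 — consistent with the "more VRAM for cheaper" point.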
This is very good to know. I'd read that ROCm can be a pain to get up and running, but that was months ago and this space is moving fast. I may switch over when I can if that's the case. My 3080 is feeling its age already. Thank you!
That used to be the case, yes.
Alpaca pretty much lets you run LLMs out of the box on AMD after installing the ROCm add-on in Discover/Software. LM Studio also works perfectly.
Image generation is a little more complicated. ComfyUI supports AMD once all ROCm dependencies are installed and the PyTorch build is swapped for the AMD (ROCm) version.
However, ComfyUI provides no prebuilt packages for Linux or AMD right now, so you have to set it up yourself. I currently use a simple Docker container for ComfyUI, which just takes the AMD ROCm image and installs ComfyUI on top.
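A minimal sketch of such a container, assuming the `rocm/pytorch` base image from Docker Hub (the exact tag, and whether the image already ships everything ComfyUI needs, are assumptions — check current versions):

```dockerfile
# Sketch only: base image tag is an assumption, pin a real ROCm/PyTorch tag.
FROM rocm/pytorch:latest

WORKDIR /opt
RUN git clone https://github.com/comfyanonymous/ComfyUI.git
WORKDIR /opt/ComfyUI

# torch already comes from the ROCm base image; since requirements.txt
# doesn't pin a version, pip leaves the existing (ROCm) build in place.
RUN pip install -r requirements.txt

EXPOSE 8188
# --listen makes the web UI reachable from outside the container.
CMD ["python", "main.py", "--listen", "0.0.0.0"]
```

At run time the container needs the GPU devices passed through, e.g. `docker run --device=/dev/kfd --device=/dev/dri -p 8188:8188 …`, which is the standard ROCm container invocation.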
Definitely bookmarking this reply. I haven't tried ComfyUI yet, but I've had it starred on GitHub since back when it was fairly new. I'm no stranger to building from source, but I haven't dived into Docker yet, which is becoming more and more of a weakness by the day: Docker is required by some really cool projects, and I'm missing out.