this post was submitted on 06 May 2024
40 points (93.5% liked)

I looked this up before buying the GPU, and I read that it should "just work" on Debian stable (Bookworm, 12). Well, it doesn't "just work" for me. :(

clinfo returns two fatal errors:

fatal error: cannot open file '/usr/lib/clc/gfx1100-amdgcn-mesa-mesa3d.bc': No such file or directory

fatal error: cannot open file '/usr/lib/clc/gfx1030-amdgcn-mesa-mesa3d.bc': No such file or directory

I get similar errors when trying to run OpenCL-based programs.

I'm running a backported kernel, 6.6.13, and the latest Bookworm-supported mesa-opencl-icd, 22.3.6. From what I've found online, this should work, though Mesa 23.x is recommended. Is it safe/sane to install Mesa from Debian Trixie (testing)?
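
If I do go the Trixie route, my rough plan (untested; the pin priority is just my assumption of a sane default) would be to pin testing low so that only explicitly requested packages come from it:

    # /etc/apt/sources.list.d/trixie.list
    deb http://deb.debian.org/debian trixie main

    # /etc/apt/preferences.d/99-trixie
    Package: *
    Pin: release n=trixie
    Pin-Priority: 100

    sudo apt update
    sudo apt install -t trixie mesa-opencl-icd

I'd appreciate a sanity check on whether that stays contained or drags half of testing in as dependencies.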

I've also seen references to AMD's official proprietary drivers. They do not officially support Debian, but can/should I run the Ubuntu installer anyway?
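
In case it matters, my understanding of that route (based on AMD's Ubuntu instructions, so the exact installer version is omitted and Debian compatibility is exactly what I'm unsure about) is roughly:

    # download the amdgpu-install .deb from repo.radeon.com, then:
    sudo apt install ./amdgpu-install_VERSION_all.deb
    # keep the in-kernel amdgpu driver, only add the ROCm/OpenCL userspace:
    sudo amdgpu-install --usecase=opencl --no-dkms

I'm mostly worried about it fighting with the Mesa packages that are already installed.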

I'm hoping to get this up and running without any drastic measures like distro hopping. That said, if "upgrade to Testing or Unstable" is the simplest approach, I am willing to entertain the idea.

Thanks in advance for any help you can offer.

[–] TerraRoot@sh.itjust.works 1 points 6 months ago (1 children)

I'm still running an RX 570, so I'm no real help, but +1 for using Debian testing; I've been daily driving it for years on my gaming desktop. Stable is for servers and hardware that isn't booted up daily.

[–] Shareni@programming.dev 0 points 6 months ago (2 children)

for using Debian testing; I've been daily driving it for years on my gaming desktop. Stable is for servers and hardware that isn't booted up daily.

Why even use Debian at that point?

Half of my packages are from nix unstable, but the system itself is still Debian stable. That means I've got bleeding-edge user packages, but my system always boots. Casuals can use Flatpak instead.
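
Getting started on top of Debian is only a couple of commands (roughly, from memory, so treat it as a sketch):

    # single-user Nix install
    sh <(curl -L https://nixos.org/nix/install) --no-daemon

    # point the default channel at unstable
    nix-channel --add https://nixos.org/channels/nixpkgs-unstable nixpkgs
    nix-channel --update

From there apt only manages the base system, and the actual user packages are handled declaratively (more on that below).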

The only downside is bleeding-edge hardware, but again, why use Debian at that point?

[–] TerraRoot@sh.itjust.works 4 points 6 months ago

Because I've been using apt-based distros since the late '90s, because I work in IT, and because I don't like ricing, hours of config, or chasing features. There's a yawning chasm of difference between "always boots" and "always boots and I can dive right into work/games/browsing/whatever".

[–] hersh@literature.cafe 1 points 6 months ago (1 children)

Can you explain more about your workflow? Do the Nix packages have their own isolated dependency resolution? How does it work when Debian packages depend on a library you get from Nix, or vice-versa?

[–] Shareni@programming.dev 2 points 6 months ago

Can you explain more about your workflow?

Here's an example. The main difference from my current setup is that I'm installing nixGL through nix-channels, because that way I don't have to use --impure, although I still haven't gotten around to automating its usage, so that might still change.

Basically I just have a list of packages that I want installed (home.nix), and I run updates a couple of times a week. If something breaks (it hasn't yet), I could just roll back to a previous generation.
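
A stripped-down home.nix looks something like this (the username and packages are placeholders, and stateVersion should match whatever release you first installed with):

    { config, pkgs, ... }:
    {
      home.username = "me";
      home.homeDirectory = "/home/me";
      home.stateVersion = "24.05";  # don't bump this after the initial install

      # the actual package list, coming from nixpkgs-unstable
      home.packages = with pkgs; [
        neovim
        ripgrep
        mpv
      ];

      # let home-manager manage itself
      programs.home-manager.enable = true;
    }

home-manager switch applies it, and home-manager generations lists the older generations you can fall back to.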

Do the Nix packages have their own isolated dependency resolution?

Each package declares its dependencies; Nix downloads them into separate store paths and links them in so the package can find them. If two packages need the same version of a dependency (identified by the hash of its build output), they both point at the same store path. If they need different versions, Nix simply keeps both and gives each package the one it asked for.

That way you theoretically never get mismatched dependencies, but it uses a bit more disk space.
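
You can see it if you poke around the store; the output below is illustrative (hashes and versions are made up):

    # profile entries are just symlinks into the store
    $ ls -l ~/.nix-profile/bin/rg
    ... -> /nix/store/<hash>-ripgrep-<version>/bin/rg

    # and each store path records exactly which other store paths it needs
    $ nix-store --query --references /nix/store/<hash>-ripgrep-<version>
    /nix/store/<hash>-glibc-<version>
    ...

Every package and every dependency sits in its own hashed path, so two versions of the same library never collide.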