[–] cbarrick@lemmy.world 43 points 7 months ago (2 children)

Unfortunately, those of us doing scientific compute don't have a real alternative.

ROCm just isn't as widely supported as CUDA, and neither is Vulkan for GPGPU use cases.

AMD dropped the ball on GPGPU, and Nvidia is eating their lunch. Linux desktop users be damned.
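
Even the escape hatch shows how entrenched CUDA is: PyTorch's ROCm build is driven through the same `cuda` device string, with `torch.version.hip` as the only tell. A minimal sketch of device selection as it looks in practice (standard PyTorch APIs, any recent build):

```python
import torch

def pick_device() -> torch.device:
    """Pick the best available accelerator.

    Note: PyTorch's ROCm builds reuse the "cuda" device string,
    so even AMD GPUs are driven through the CUDA-shaped API.
    """
    if torch.cuda.is_available():
        backend = "ROCm/HIP" if torch.version.hip is not None else "CUDA"
        print(f"GPU backend: {backend}")
        return torch.device("cuda")
    return torch.device("cpu")

x = torch.randn(1024, 1024, device=pick_device())
y = x @ x  # routed to cuBLAS (or rocBLAS) by the framework
```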

[–] TropicalDingdong@lemmy.world 10 points 7 months ago (1 children)

Yep, yep, and yep.

And they've been eating their lunch for so long at this point that I've given up on that changing.

The new world runs on CUDA, and that's just the way it is. I don't really want an Nvidia card; Radeon seems far better for price-to-performance. Except I can justify an Nvidia card for work.

I can't justify a Radeon for work.

[–] cbarrick@lemmy.world 11 points 7 months ago (2 children)

Long term, I expect Vulkan to be the replacement for CUDA. ROCm isn't going anywhere...

We just need fundamental Vulkan libraries to be developed that can replace the CUDA equivalents.

  • cuFFT -> vkFFT (this definitely exists)
  • cuBLAS -> vkBLAS (is anyone working on this?)
  • cuDNN -> vkDNN (this definitely doesn't exist)

At that point, adding Vulkan support to XLA (JAX and TensorFlow) or ATen (PyTorch) wouldn't be that difficult.
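
For a sense of what that support amounts to: these frameworks funnel every op through a per-backend dispatch table, so a Vulkan backend is mostly registering the same op set against Vulkan kernels. A toy sketch of the shape of it (illustrative names only; the real ATen dispatcher and XLA lowerings live in C++, and vkBLAS here is the hypothetical library from the list above):

```python
from typing import Callable

# (backend, op) -> kernel. Real frameworks do this in C++
# (ATen's dispatcher, XLA's backend lowering); this is just the shape.
OPS: dict[tuple[str, str], Callable] = {}

def register(backend: str, op: str):
    def wrap(fn: Callable) -> Callable:
        OPS[(backend, op)] = fn
        return fn
    return wrap

@register("cuda", "gemm")
def gemm_cuda(a, b):
    raise NotImplementedError("would call cuBLAS")

@register("vulkan", "gemm")
def gemm_vulkan(a, b):
    raise NotImplementedError("would call the hypothetical vkBLAS")

def matmul(a, b, backend: str):
    # The front end stays the same; only the registry entry changes.
    return OPS[(backend, "gemm")](a, b)
```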

[–] DarkenLM@kbin.social 18 points 7 months ago

> wouldn't be that difficult.

The number of times I've said that, only to be quickly proven wrong by the fundamental forces of existence, is the reason that's going to be written on my tombstone.

[–] TropicalDingdong@lemmy.world 3 points 7 months ago (1 children)

I think it's just path stickiness at this point. CUDA works, and then you can ignore its existence and do the thing you actually care about. ML in the pre-CUDA days was painful. CUDA makes it not painful. Asking people to return to painful...
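
And the "ignore its existence" part is real: once the framework is installed, the only CUDA-specific thing in user code is a device string. A minimal sketch (plain PyTorch; the same loop runs on CPU if the string changes):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Linear(64, 1).to(device)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

x = torch.randn(128, 64, device=device)
y = torch.randn(128, 1, device=device)

loss = F.mse_loss(model(x), y)
loss.backward()
opt.step()  # everything past the device string is backend-agnostic
```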

Good luck..

[–] cbarrick@lemmy.world 8 points 7 months ago* (last edited 7 months ago) (1 children)

Yeah, but I want both GPU compute and Wayland for my desktop.

[–] ManniSturgis@lemmy.zip 2 points 7 months ago

Hybrid graphics. Works for me.

[–] urbanxs@lemmy.ml 4 points 7 months ago (1 children)

I find it eerily odd how AMD seems to almost intentionally stay out of Nvidia's way in terms of CUDA and a couple of other things. I don't wish to speculate, but considering how AI is having a blowout yet AMD is basically not even trying, it feels as if the Nvidia CEO being cousins with AMD's CEO has something to do with it. Maybe I'm reading too much into it, but there's something going on. Why would AMD leave so much money on the table?

Bubbles tend to pop sometimes.