
Hello! I'm thinking about switching from my beloved Fedora to a rolling release distro, because the idea really intrigues me, but I'm a bit scared of Arch; it's still too soon for me to go down that rabbit hole XD
What do you think about Debian testing? It's not a "true" rolling release as far as I understand, but it "practically" behaves like one, correct? In the system information I still see Debian 12; what will happen when Debian 13 stable is released?

Sorry if these are silly questions, and thanks to all in advance!

[–] Raphael@lemmy.world 1 points 1 year ago* (last edited 1 year ago) (1 children)

Some points I handpicked for you. Ignore the uninformed masses.

The Debian GNU/Linux FAQ

Chapter 3 Choosing a Debian distribution

3.1 Which Debian distribution (stable/testing/unstable) is better for me?

If you are a desktop user with a lot of experience in the operating system and do not mind facing the odd bug now and then, or even full system breakage, use unstable. It has all the latest and greatest software, and bugs are usually fixed swiftly.

3.1.5 Could you tell me whether to install stable, testing or unstable?

Testing has more up-to-date software than Stable, and it breaks less often than Unstable. But when it breaks, it might take a long time for things to get rectified: sometimes a matter of days, sometimes months. It also does not have permanent security support.

Unstable has the latest software and changes a lot. Consequently, it can break at any point. However, fixes often arrive within a couple of days, and it always has the latest releases of software packaged for Debian.

3.1.6 You are talking about testing being broken. What do you mean by that?

Sometimes, a package might not be installable through package management tools. Sometimes, a package might not be available at all, maybe it was (temporarily) removed due to bugs or unmet dependencies. Sometimes, a package installs but does not behave in the proper way. When these things happen, the distribution is said to be broken (at least for this package).

3.1.7 Why is it that testing could be broken for months? Won’t the fixes introduced in unstable flow directly down into testing?

The bug fixes and improvements introduced in the unstable distribution trickle down to testing after a certain number of days. Let's say this threshold is 5 days. The packages in unstable go into testing only when there are no RC-bugs reported against them. If there is an RC-bug filed against a package in unstable, it will not go into testing after the 5 days. The idea is that, if the package has any problems, they will be discovered by people using unstable and fixed before the package enters testing. This keeps testing in a usable state most of the time. Overall a brilliant concept, if you ask me. But things aren't always that simple.

tldr: "The goal of the Debian project is to produce Stable." Sid is essentially a rolling release; it's nothing like Fedora Rawhide in the old days, when Rawhide was essentially a testbed for Red Hat, but it's not meant to be as smooth as, say, Arch or Tumbleweed. Testing, on the other hand, isn't merely some layer between Unstable and Stable; it's part of a bigger project. Testing exists for the sake of Stable, not for the sake of Testing. The same logic applies to Unstable, but you do get some level of "just works" when you're simply shipping all the latest software (after all, Debian also has Experimental for the truly disruptive stuff); you should still expect breakage when something truly major happens.
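
If you ever wonder why a particular package hasn't migrated from unstable to testing yet, the grep-excuses tool from the devscripts package will show you its migration "excuses". A minimal example (the package name is just a placeholder):

    sudo apt install devscripts
    # show why this package is (or isn't) being held out of testing
    grep-excuses somepackage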

[–] TunaCowboy@lemmy.world 0 points 1 year ago* (last edited 1 year ago) (2 children)

Ignore the uninformed masses.

Although this is useful information, gratuitous displays of hubris are gross. You should do yourself a favor and keep reading - it is clear that the decision should be up to the user after careful consideration.

All of the issues regarding testing have well-known mitigations that are trivial to implement. You can find this information and the corresponding links here.

It is a good idea to include unstable and experimental in your apt sources so that you have access to newer packages when needed. With the APT::Default-Release apt config setting, or with apt pinning, you can have packages from testing by default; if you manually upgrade some packages to unstable or experimental, you will then keep getting upgrades within that suite until those packages migrate down (from experimental into unstable, or from unstable into testing). For this to work nicely, the apt pins need priorities lower than 990 and equal to or higher than 500. You can also pin specific packages to unstable/experimental if you always want the latest versions of them.
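
To make that concrete, here is a rough sketch of what such a setup could look like; the mirror, components, file names, and exact priority numbers are illustrative choices, not the only valid ones:

    # /etc/apt/sources.list -- testing plus unstable (and experimental if you want it)
    deb http://deb.debian.org/debian testing main
    deb http://deb.debian.org/debian unstable main
    deb http://deb.debian.org/debian experimental main

    # Option 1: /etc/apt/apt.conf.d/99default-release
    # makes testing the default target release (priority 990)
    APT::Default-Release "testing";

    # Option 2: /etc/apt/preferences.d/pinning -- explicit pins instead.
    # testing wins by default; unstable sits in the 500-989 range mentioned above,
    # so packages you pull from it keep tracking unstable until they migrate down.
    Package: *
    Pin: release a=testing
    Pin-Priority: 990

    Package: *
    Pin: release a=unstable
    Pin-Priority: 500

    # Add an analogous stanza for a=experimental if you also want to pin it.

    # You can also pin individual packages you always want from unstable
    # ("somepackage" is a placeholder):
    Package: somepackage
    Pin: release a=unstable
    Pin-Priority: 995

With that in place, something like "sudo apt install -t unstable somepackage" pulls a single package from unstable on demand, while everything else keeps following testing.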

It is a good idea to install security updates from unstable, since they take extra time to reach testing and the security team only releases updates to unstable. If you have unstable in your apt sources but pinned lower than testing, you can automatically add temporary pinning for packages whose security issues are fixed in unstable, using the output of debsecan.
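
As a rough sketch of how that could be automated (the suite name, output path, and pin priority are assumptions, and you should eyeball debsecan's output before trusting it):

    #!/bin/sh
    # Give packages with security fixes available in unstable a high enough pin
    # that the fixed versions are installed even though unstable is normally
    # pinned below testing. "trixie" is assumed to be the current testing codename.
    debsecan --suite trixie --only-fixed --format packages | sort -u | \
    while read pkg; do
        printf 'Package: %s\nPin: release a=unstable\nPin-Priority: 995\n\n' "$pkg"
    done > /etc/apt/preferences.d/security-from-unstable

Run it as root (it writes under /etc/apt) and re-run it periodically, e.g. from a cron job, so the temporary pins go away again once the fixes reach testing.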