The core concept behind SOM is using unique chalcogenide materials that perform double duty as both the memory cell and the selector device. In traditional phase-change or resistive RAM, you need a separate component, such as a transistor, to act as the selector that activates each cell. In SOM, by contrast, the same chalcogenide material that stores data by switching between conductive and resistive states also serves as its own selector.
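
A rough way to picture the dual role (a toy model with made-up numbers, not any vendor's actual read scheme): the programmed state shifts the cell's threshold voltage, and a read pulse pitched between the two thresholds snaps the cell conductive in only one of them, so the cell effectively selects itself.

```c
#include <stdbool.h>
#include <stdio.h>

/* Toy model of a selector-only memory (SOM) cell. The programmed
 * state shifts the chalcogenide's threshold voltage; a read pulse
 * between the two thresholds triggers the cell in only one state,
 * so the cell acts as its own selector. All values are invented. */

typedef enum { STATE_RESET, STATE_SET } som_state;

#define VTH_SET   1.0   /* hypothetical threshold in the SET state (V) */
#define VTH_RESET 2.0   /* hypothetical threshold in the RESET state (V) */
#define V_READ    1.5   /* read pulse sits between the two thresholds */

/* Write: programming pulses move the cell between states. */
static void som_write(som_state *cell, bool bit) {
    *cell = bit ? STATE_SET : STATE_RESET;
}

/* Read: the cell snaps conductive only when the read pulse exceeds
 * its current threshold -- no separate access transistor needed. */
static bool som_read(const som_state *cell) {
    double vth = (*cell == STATE_SET) ? VTH_SET : VTH_RESET;
    return V_READ > vth;   /* conducts => reads back as 1 */
}

int main(void) {
    som_state cell;
    som_write(&cell, true);
    printf("wrote 1, read %d\n", som_read(&cell));
    som_write(&cell, false);
    printf("wrote 0, read %d\n", som_read(&cell));
    return 0;
}
```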

[–] sxan@midwest.social 9 points 3 weeks ago (1 children)

This would drive the first significant changes in OS architecture in decades. Until now, OSes have largely relied on reboots to fix bit rot in system state. What do you do when there is no more "rebooting," when system state persists across power cycles? The first solutions will probably be intentionally wiping blocks during initialization, but I think this might actually drive wider adoption of micro-kernel architectures, where the core is proven solid against state corruption, and everything else can be hot-reloaded when errors stack up enough to cause a core dump.
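
To sketch what "wiping blocks during initialization" might look like (a hypothetical kernel-side fragment; every name here is invented for illustration):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical boot-time scrub for persistent memory. With no
 * power cycle to clear state, the kernel has to decide which
 * regions keep their contents and which get wiped as if the
 * machine had actually been rebooted. */

struct pmem_region {
    void   *base;
    size_t  len;
    int     keep;   /* 1 = persistent by intent, 0 = volatile by convention */
};

/* Zero every region not marked persistent, so stale state can't
 * leak across what used to be a power cycle. */
static void pmem_scrub_on_boot(struct pmem_region *regions, size_t n) {
    for (size_t i = 0; i < n; i++) {
        if (!regions[i].keep)
            memset(regions[i].base, 0, regions[i].len);
    }
}

int main(void) {
    static uint8_t scratch[64];   /* stands in for a real pmem mapping */
    struct pmem_region regions[] = {
        { scratch, sizeof scratch, 0 },   /* volatile by convention: wiped */
    };
    pmem_scrub_on_boot(regions, 1);
    return 0;
}
```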

TL;DR: Right now, when Linux crashes, it's usually because some subsystem has become corrupt, and Linux has very few ways of clearing that; think zombie processes, which linger until their parent reaps them and which, in practice, people often end up clearing with a reboot. With persistent memory, Linux will have to get better about this, or else a micro-kernel will have a chance to gain the upper hand.
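
For the zombie case specifically, what actually clears one is reaping, not rebooting; a minimal POSIX illustration:

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* A child that exits before the parent calls wait() becomes a
 * zombie: its exit status lingers in the kernel's process table
 * until the parent reaps it (or the parent dies and init does). */

int main(void) {
    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0)
        _exit(0);              /* child exits immediately */

    sleep(1);                  /* child is now a zombie (visible in `ps`) */

    int status;
    waitpid(pid, &status, 0);  /* reaping frees the zombie entry */
    printf("reaped child %d\n", (int)pid);
    return 0;
}
```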

Not for the first time do I wish someone with Linus' motivational and organizational skills would pick up MINIX and take it over the finish (Finnish?) line; it's so annoyingly close. But maybe Redox will get there.

[–] kSPvhmTOlwvMd7Y7E@programming.dev 1 points 3 weeks ago (1 children)

I would say Redox is miles ahead of MINIX; am I wrong?

[–] sxan@midwest.social 2 points 2 weeks ago

I don't know. Probably? It's certainly far more active right now. Last time I looked at it, it wasn't suitable for running on bare metal, which MINIX is. But I haven't looked in a few months, so that may have changed.

I think the lack of virtual memory was the big blocking issue in MINIX, although my current desktop has way more memory than I'll ever use, so maybe it's worth looking at again. Still, MINIX never had the contributor support to advance beyond a teaching tool, and Redox appears to be farther along here.

There are some tools I won't do without, and that's probably my biggest blocker. But, yeah - I have high hopes for Redox. I wish they wouldn't focus on bespoke windowing and would just adopt Wayland, or even X (although the latter would be an odd choice). It seems like a lot of work going into something very limiting, and it smells like a terminal case of NIH syndrome. I'll admit I'm not following the project closely, though; I could be wrong.