Maybe this will lead to a future where stochastic terrorism isn't a protected activity?
As much as I believe it is a breeding ground for right wing extremism, it's a little strange that 4chan is being lumped in with these other sites for a suit like this. As far as I know, 4chan just promotes topics based on the number of people posting to it, and otherwise doesn't employ an algorithm at all. Kind of a different beast to the others, who have active algorithms trying to drive engagement at any cost.
Can we stop letting the actions of a few bad people be used to curtail our freedom on the platforms we all use?
I don't want the internet to end up being policed by corporate AIs and poorly implemented bots (looking at you, auto-mod).
The internet is already a husk of what it used to be, of what it could be. It used to be personal, customisable... dare I say it, messy and human...
...maybe that was serving a need people now feel alienated from. Now we live as corporate avatars who risk being banned every time we comment anywhere.
It's tiresome.
Are the platforms guilty, or are the users who supplied the radicalizing content guilty? Last I checked, most of the content on YouTube, Facebook, and Reddit is not generated by the companies themselves.