this post was submitted on 20 Mar 2024
1012 points (98.0% liked)
Technology
you are viewing a single comment's thread
I think designing media products around maximally addictive, individually targeted algorithms, combined with content the platform neither controls nor takes responsibility for, is dangerous. Such an algorithm will find the people most susceptible to anything from racist conspiracy theories to eating disorder content and show them more of it. Attempts to moderate away the worst examples just lead people to make variations that don't technically violate the rules.
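The feedback loop described above can be sketched as a toy engagement-maximizing ranker (all names here are hypothetical and the logic is drastically simplified; real platform systems are far more complex):

```python
from collections import defaultdict

def rank_feed(posts, engagement):
    """Toy engagement-optimized ranking: score each post by how much the
    user has previously engaged with its topic, so whatever they linger
    on is exactly what gets surfaced more."""
    return sorted(posts, key=lambda p: engagement[p["topic"]], reverse=True)

def record_click(engagement, post):
    # Every interaction reinforces the topic, tightening the loop.
    engagement[post["topic"]] += 1

engagement = defaultdict(int)
posts = [{"id": 1, "topic": "sports"}, {"id": 2, "topic": "conspiracy"}]
record_click(engagement, posts[1])   # one click on fringe content...
feed = rank_feed(posts, engagement)  # ...and it now tops the feed
```

The point of the sketch is that nothing in the objective cares what the topic is; susceptibility alone drives amplification.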
With that said, laws made and legal precedents set in response to tragedies are often ill-considered, and I don't like this case. I especially don't like that it includes Reddit, which was not using that type of individualized algorithm to my knowledge.
The problem then becomes that if clearly defined rules aren't enough, the people who run these sites have to start making individual judgment calls based on, well, their gut, really. And that creates a lot of issues if the site in question can be held accountable for making a poor call or overlooking something.
The threat of legal repercussions hanging over them will push them to default to the strictest action, and that's a problem when there's no clear definition of what needs to be actioned against.
It's the chilling effect they use in China: don't make it clear what will get you in trouble, and people become too scared to say anything.
Just another group looking to control expression by the back door
There's nothing ambiguous about this. Give me a break. We're demanding that social media companies stop deliberately amplifying negativity and extremism to get clicks. This has fuck all to do with free speech. What they're doing isn't "free speech", it's mass manipulation, and it's very deliberate. And it isn't disclosed to users at any point, which also makes it fraudulent.
It's incredibly ironic that you're accusing people of an effort to control expression when that's literally what social media has been doing since the beginning. They're the ones trying to turn the world into a dystopia, not the other way around.