this post was submitted on 20 Mar 2024
1012 points (98.0% liked)

Technology

[–] 0x0@programming.dev 12 points 8 months ago (4 children)

It's never the parents, is it?

[–] PoliticalAgitator@lemmy.world 8 points 8 months ago

You mean the "responsible gun owners" who don't properly secure their weapons from a child?

[–] Canyon201@lemmy.world 12 points 8 months ago

Right in the IPO price!

[–] Binthinkin@kbin.social 12 points 8 months ago

Goddamn right they do. Meta should be sued to death for the genocides too.

[–] TropicalDingdong@lemmy.world 11 points 7 months ago (1 children)

I don't understand how a social media company can face liability in this circumstance while a weapons manufacturer doesn't.

[–] UsernamesAreDifficult@lemmy.dbzer0.com 9 points 8 months ago* (last edited 8 months ago)

Honestly, good, they should be held accountable and I hope they will be. They shouldn't be offering extremist content recommendations in the first place.

[–] whoreticulture@lemmy.world 9 points 8 months ago (1 children)

How does Lemmy pick which articles are at the top of my feed? Does anyone know how All is sorted?
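For the curious: Lemmy's ranking code is open source, and its "Hot" sort boils down to a time-decayed vote score. Below is a minimal Python sketch of that shape; the function name and exact constants are illustrative assumptions, not Lemmy's literal implementation.

```python
import math
from datetime import datetime, timezone

def hot_rank(score: int, published: datetime, gravity: float = 1.8) -> float:
    """Time-decayed score: newer posts need fewer votes to rank highly.

    The general shape (log of the vote score divided by an age term
    raised to a "gravity" exponent) follows Lemmy's hot-rank function;
    the constants here are illustrative assumptions.
    """
    age_hours = (datetime.now(timezone.utc) - published).total_seconds() / 3600
    return math.log(max(1, score + 3)) / (age_hours + 2) ** gravity
```

The "All" feed is this same per-post ranking applied across every federated community your instance knows about, rather than only the ones you subscribe to.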

[–] atrielienz@lemmy.world 7 points 8 months ago (2 children)

So, I can see a lot of problems with this: specifically, the same problems the public and regulatory bodies face when deciding whether to keep or overturn Section 230. Free speech isn't necessarily what I'm worried about here, mostly because it's already settled that free speech is a constraint only the government is actually beholden to. Message boards have censored content as they see fit and will continue to do so.

Section 230 basically stipulates that companies that provide online forums (Meta, Alphabet, 4chan, etc.) are not liable for the content their users post. Part of the reason it works is that these companies adhere to strict guidelines regarding content and, most importantly, moderation.

Section 230(c)(2) further provides "Good Samaritan" protection from civil liability for operators of interactive computer services in the good faith removal or moderation of third-party material they deem "obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected."

Reddit, Facebook, 4chan, et al. do have rules their users are required to follow in order to post. And for the most part the communities on these platforms are self-policing; there just aren't enough paid moderators to make it work otherwise.

That being said, the real problem is that this indirectly challenges Section 230. It barely skirts the question of whether the platforms themselves can be considered publishers, or are at all responsible for the content their users post, and it very much attacks how users are presented with content to keep them engaged via algorithms (which is directly how these companies make their money).

Even if the lawsuits fail, this will still be problematic. It could lead to draconian moderation of what can be posted and by whom. All race-related topics, regardless of whether they include hate speech, could be censored, for example. Politics? Censored. The discussion of potential new laws? Censored.

But I think it will be worse than that. The algorithm is what makes the ad space these companies sell so valuable, and this is a direct attack on that. We lack the consumer privacy protections to shield the public from the fallout. If the ad space isn't valuable, the data will be, and there's nothing stopping these companies from selling user data; some of them already do. What these apps do in the background is already pretty invasive, and this could push that invasive data scraping even further. I don't like that.

That being said, there is a point I agree with: these companies do make their algorithms addictive, and those algorithms absolutely will push content at users. If that content is objectionable, so long as it isn't outright illegal, these companies do not care, because they gain from it monetarily (a sketch of that ranking logic follows below).

What we actually need is data privacy protections. Holding these companies accountable for their algorithms is a good idea, but I don't agree that this is the way to do it constructively. It would be better to flesh out Section 230 as a living document that can change with the times, because the Internet landscape was simply different when it was written.

What I would like to see is platforms moderating content that is posted and presented as fact. We don't see that nearly enough in places like Reddit. Users can post anything as fact, and the echo chambers will rally around it if they believe it. It isn't terribly difficult to radicalise a person. But the platforms aren't doing that on purpose; the other users are, and the algorithms are helping them.
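As a concrete illustration of the point above: a feed ranker that optimises purely for predicted engagement has no term for harm, only for attention. This is a hypothetical sketch, not any platform's actual system; every name and weight in it is invented.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    post_id: str
    predicted_dwell_seconds: float  # model's guess at time spent on the post
    predicted_reshare_prob: float   # model's guess at a reshare happening

def engagement_score(c: Candidate) -> float:
    # Hypothetical objective: nothing here asks whether the content is
    # harmful or radicalising, only whether it will hold attention.
    return c.predicted_dwell_seconds + 100.0 * c.predicted_reshare_prob

def rank_feed(candidates: list[Candidate]) -> list[Candidate]:
    # Outrage and extremity tend to raise predicted engagement, so a purely
    # engagement-maximising sort surfaces them without anyone intending it.
    return sorted(candidates, key=engagement_score, reverse=True)
```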

[–] Embarrassingskidmark@lemmy.world 7 points 8 months ago (8 children)

The trifecta of evil. Especially Reddit, fuck Reddit... Facebook too.

[–] FiniteBanjo@lemmy.today 6 points 8 months ago

Maybe this will lead to a future where stochastic terrorism isn't a protected activity?
