Interestingly enough, I feel like the current systems require mods/admins to keep an eye on things at all times, since harassment can happen at any moment and users can't really protect themselves.
There is a scenario which is exactly the opposite of the one you presented:
BlueSky just passed 21 million users.
I had a look again at the post.
Would that be enough here? Of course, it depends on the topic of the thread (there's no link in the post, so I can't really see what they were talking about), but I'm pretty sure there would be more than 4 or 5 people calling out the misinformation.
Can't we use the same argument here that other people use about Lemmy being a public forum, and thus the posts being visible to everyone except the blocked accounts?
In the scenario you suggested, a user who has blocked a harasser should no longer be exposed to continued harassment from them. So while the mods may still have to step in, there is no particular urgency. Also, a determined harasser will just create alt accounts no matter what the admins do, regardless of the blocking model used.
BlueSky isn't really comparable, since it has a user-to-user interaction model, whereas Reddit and Lemmy have a community-based interaction model. In a sense, every BlueSky user is an admin of their own community.
Agreed. However, good faith users by nature tend to stick to their accounts instead of moving around (excepting the current churn because Lemmy is new). Regardless of how many people would call out disinformation, it's ultimately not too difficult to block them all. It could even be easily automated, since downvotes are public, meaning you could do this not just to vocal users fighting disinformation but to anybody who so much as disagrees with you in the first place. An echo chamber could literally be created that's invisible to everyone but server admins.
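To illustrate what I mean, here's a rough, purely hypothetical sketch of that kind of automation. `fetch_downvoters()` and `block_user()` are placeholders I made up, standing in for whatever a given platform or admin tooling actually exposes; this isn't a real Lemmy API, just the shape of the loop.

```python
# Hypothetical sketch only: assumes per-user vote data is visible to the
# account running this (e.g. an admin, or a platform where votes are public).

def fetch_downvoters(post_id):
    """Placeholder: return the set of usernames that downvoted a post."""
    raise NotImplementedError

def block_user(username):
    """Placeholder: add a username to this account's block list."""
    raise NotImplementedError

def auto_block_dissenters(my_post_ids):
    """Block everyone who has ever downvoted any of my posts."""
    already_blocked = set()
    for post_id in my_post_ids:
        for username in fetch_downvoters(post_id):
            if username not in already_blocked:
                block_user(username)
                already_blocked.add(username)
    return already_blocked
```

Run that periodically over your own post history and everyone who ever disagreed with you silently disappears from your replies, which is exactly the invisible echo chamber problem.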
We could, but again, good faith users tend not to browse while logged out. They have little reason to do so, while bad faith users have every reason to.
We could say that every user can mod their own threads.
The way Reddit does it at the moment still lets good faith users identify such behaviour: it shows [unavailable] when someone who blocked you comments, so you know you just have to open that link in a private tab to see the content. I actually have that right now, as a right-wing user blocked me because I would usually call out their bullshit. It still lets me see their comments and post them to a meta community to call out their right-wing sub.