this post was submitted on 19 Nov 2024
228 points (91.0% liked)
Fediverse memes
Reddit implemented this, and it was heavily abused to push troll posts and disinformation up the algorithm: by blocking everyone who disagreed with them, bad actors ensured that after a few attempts the naysayers could no longer see or reply to their posts.
Somebody tested it, and was able to get their test misinformation posts heavily upvoted within just a few days.
https://www.reddit.com/r/TheoryOfReddit/comments/sdcsx3/testing_reddits_new_block_feature_and_its_effects/
This has happened to me multiple times. I called somebody out for saying something wrong or bigoted or whatever, they blocked me after responding, and I could no longer reply to their response. And then, presumably, they kept saying the same things where I couldn't see them, because I was blocked.
It's a short-sighted way of implementing blocking, since it allows for heavy abuse by bad actors
Yeah, there was plenty of discussion on Reddit back in the day about the drawbacks and pitfalls of the blocking system. Surprised to see people calling for its implementation here.
Is there a long-sighted way to implement blocking?
Not when it takes one minute to create a new account
@Blaze@feddit.org, genuinely interested in your opinion on this considering the new information
Do you really believe that someone could get a misinformation post heavily upvoted here? The main differences from Reddit are:
If someone did something similar here, they would at the very least get called out on !fediverselore@lemmy.ca or !yepowertrippinbastards@lemmy.dbzer0.com, and mods and admins would be called on to act. Reddit does not have such mechanisms.
I disagree with you to some extent.
We should keep in mind that the fediverse and Lemmy will likely grow to larger scales, and any systems and safety measures we implement should account for that. The block mechanism you suggest is extremely ripe for abuse at large scale, and relying on mods/admins to combat it places an unnecessary extra load on them, if it is even possible.
Interestingly enough, I feel the current system requires mods/admins to keep watch at all times, since harassment can happen at any moment and users can't really protect themselves.
There is a scenario that is exactly the opposite of the one you presented:
BlueSky just passed 21 million users.
I had a look again at the post.
Would that be enough here? Of course, it depends on the topic of the thread (there's no link in the post, so I can't really see what they were talking about), but I'm pretty sure more than 4 or 5 people would call out the misinformation.
Can't we use the same argument here that other people make about Lemmy being a public forum: the posts are public for everyone except the blocked accounts?
In the scenario you suggested, a user who has blocked a harasser will no longer see continued harassment from that account. So while the mods may still have to step in, there is no particular urgency. Also, a determined harasser will just create alt accounts no matter what the admins do, regardless of the blocking model used.
BlueSky isn't really comparable, since it has a user-to-user interaction model, whereas Reddit and Lemmy have a community-based interaction model. In a sense, every BlueSky user is the admin of their own community.
Agreed. However, good faith users by nature tend to stick to their accounts instead of moving around (excepting the current churn because Lemmy is new). Regardless of how many people would call out disinformation, it's ultimately not too difficult to block them all. It could even be easily automated, since downvotes are public, meaning you could do this not just to vocal users fighting disinformation but to anybody who so much as disagrees with you. An echo chamber could literally be created that's invisible to everyone but server admins.
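To illustrate how trivially this could be automated: below is a minimal, purely hypothetical sketch of the abuse pattern described above. It assumes a toy data structure where each post carries a public map of voters to votes (as an instance admin or a federated server could see); none of the names here correspond to a real Lemmy API.

```python
# Hypothetical sketch only: the "votes" structure and this workflow are
# invented for illustration, not taken from any real Lemmy client or API.

def users_to_block(posts, already_blocked):
    """Collect every account that has downvoted any of the actor's posts."""
    to_block = set()
    for post in posts:
        for voter, vote in post["votes"].items():
            # In this toy representation, -1 marks a downvote.
            if vote == -1 and voter not in already_blocked:
                to_block.add(voter)
    return to_block

# Toy data: two posts with public vote maps.
posts = [
    {"id": 1, "votes": {"alice": -1, "bob": 1, "carol": -1}},
    {"id": 2, "votes": {"dave": -1, "bob": 1}},
]

blocked = {"carol"}  # already blocked earlier
print(sorted(users_to_block(posts, blocked)))  # prints ['alice', 'dave']
```

A bad actor running something like this on a schedule would silence every dissenting voter automatically, which is exactly why vote visibility plus blocker-controlled reply suppression is such a dangerous combination.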
We could, but again, good faith users tend not to browse while logged out. They have little reason to do so, while bad faith users have every reason to.
We could say that every user can mod their own threads.
The way Reddit currently does it still allows good faith users to identify such behaviour: it shows [unavailable] when someone who blocked you comments, so you know you just have to open the link in a private tab to see the content. I actually have that at the moment, as a right-wing user blocked me because I would usually call out their bullshit. It still allows me to see their comments and post them to a meta community to call out their right-wing sub.