Lemmy Moderation Tools

294 readers
1 users here now

Welcome

I'm working on a moderation tool to work with Lemmy.

I'm still in early development and discovery. This community will carry status updates, and I'll respond to questions during development, testing, release, and post-release.

You are encouraged to create posts describing your needs. I also appreciate feedback on status updates; it helps me stay on the right track.

Join us on Matrix!

founded 2 years ago
MODERATORS
1
7
Feedback from all moderators (self.lemmy_mod_tools)
submitted 8 months ago by jgrim to c/lemmy_mod_tools
 
 

cross-posted from: https://discuss.online/post/6776820

The Sublinks team has written up a little survey, which we feel is both thorough and inclusive. It covers a wide range of topics, such as user privacy and community engagement, and tries to gauge the things that make moderating difficult.

Also, please be aware that the information collected by this survey is completely anonymous. As many of us with a social sciences background know, if you want the REAL feelings of individuals, they need to feel safe to express themselves.

👉Moderation Survey HERE👈

Please feel free to comment in this thread; we will do our best to respond to any genuine questions.

We look forward to hearing from each and every one of you!

Sincerely, The Sublinks Team

2
 
 

cross-posted from: https://sh.itjust.works/post/8365829

Question: What moderation tools do you find most useful?

Follow up question: Are there any moderation tools you wish existed but don't?

My wish would be some form of content editable by multiple accounts, useful for megathreads or community wikis.

3
 
 

I just posted about this on !fediverse@lemmy.world, but figured I'd share it here as well. :)

I'm the author of lemmyverse.net and I've recently been working on a new moderation tool called Lemmy Modder. https://modder.lemmyverse.net/

Currently, it supports user registration/approvals and content report management. I offer it either as a hosted app (which is currently only compatible with Lemmy 0.18 instances) or as a package that you can run alongside your Lemmy instance (using Docker-Compose).

Feel free to give it a go and send any feedback my way :) https://github.com/tgxn/lemmy-modder

Edit for a note: This tool does not save, proxy, or store any of your user credentials or data with me; everything is only ever stored locally in your browser. I also do not use any website tracking tools. 👍
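For anyone weighing the self-hosted route: the Docker-Compose package essentially boils down to running the published container next to your Lemmy instance. A minimal sketch is below; the image name, tag, and port are my assumptions, so check the compose file in the repo for the real values.

# Hypothetical one-liner; the repo's docker-compose.yml has the actual image name and ports.
docker run -d --name lemmy-modder -p 8080:80 ghcr.io/tgxn/lemmy-modder:latest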

4
12
I'm working on a PHP SDK (self.lemmy_mod_tools)
submitted 1 year ago by jgrim to c/lemmy_mod_tools
 
 

I'm building a PHP SDK for the Lemmy API.

Here is the repo: https://github.com/jgrim/lemmy-sdk Here is the packagist: https://packagist.org/packages/jgrim/lemmy-sdk

It's still in early development. It works; however, it's missing some tests, CI/CD tooling, and examples.

Feel free to use, contribute, or ask questions.
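For anyone curious what the SDK wraps under the hood, the Lemmy API is plain JSON over HTTP. As a rough illustration (the instance URL and credentials below are placeholders), logging in to obtain a JWT for later authenticated calls looks like this with curl:

# Placeholder instance and credentials; the response is a JSON body containing a "jwt"
# field that subsequent authenticated requests use.
curl -s -X POST "https://lemmy.example.org/api/v3/user/login" \
  -H 'Content-Type: application/json' \
  -d '{"username_or_email": "my-bot", "password": "changeme"}'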

5
 
 

I decided to move my open tasks to a Kanban board rather than making many update posts. I'll make larger announcements as I make headway, but you can view the Kanban board for due dates and progress. As I work, I'll add more details to the board, issues, wiki, etc.

Key takeaways from this post:

  • Updates should be easier to pull from the board than waiting for me to announce them.
  • Anyone can register to comment or add issues.
  • This should give you an idea of what I've been doing and what's next.
  • I got distracted from the initial bots-only goal and added many more features.
  • Everything is near completion but took longer; the first release will be bigger but more delayed.
  • I think I added everything I'm doing there. It might change as I realize what I've done or plan to do. I could also add missing details on some tickets if requested.
  • The code from socialcare.cloud will be closed source. The FOSS solution is coming later.

The Kanban board can be found here: https://track.gr.im/agiles/145-2/current

6
 
 

I spent about 10 minutes creating a simple landing page for socialcare.cloud. This will sit there as I finalize some of the user interfaces. It lists a few features.

As I get closer to being ready for release, I'll post more details. I'm very close to the first release.

I also created the Mastodon account @SocialCareCloud@utter.online & Matrix Space.

I'm hoping to get the first release out by this weekend. The second by next weekend.

7
 
 

Would you rather install a browser extension that expands Lemmy's moderation tools, or have a dedicated website to do moderation in?

I've been reviewing the Reddit Toolbox and believe I can create something similar that works with Lemmy using a mix of storage techniques.

The other option is to pull full posts, comments, and history into SocialCare and recreate the Lemmy UI, with added features embedded naturally into the pages rather than bolted on through an extension.

I've also uploaded the Icon of SocialCare.cloud to this post as a preview.

Completed tasks:

  • I have the Lemmy integration complete
  • I have the design and UI of the admin complete
  • I have job scheduling working & fetching data from instances

What's next:

  • Build the bot configuration
  • The first bot will be a post scheduler
  • Release previous features
  • The ability to create notes for users and communities
  • Release
  • Automod features
  • etc.

More updates to come. Please, let me know what you think!

-Jason

8
8
Update on progress (self.lemmy_mod_tools)
submitted 1 year ago by jgrim to c/lemmy_mod_tools
 
 

I've decided to keep the first version simple in an effort to get it live ASAP.

I've decided to develop a cloud solution on the domain socialcare.cloud. I'm writing it in PHP using the Laravel framework.

I should be able to get the base features done within a couple of weeks. I've already begun development.

Once the service is running, I'll pursue self-hosted options, perhaps in the original tech stack of Rust/Svelte/Postgres. Software distributed for others to run takes a lot more time to build than a service I host myself, and I'd like to get live ASAP.

I'll open-source what I can. For example, there isn't a Lemmy client SDK available for PHP, so I created my own; it will be open source under an MIT license.

I'll report progress as it's made. I hope everyone agrees with my decision to keep it lean and targeted.

9
 
 

So, I never thought I would suggest a form of crypto or token, but it struck me that one of the best ways to help people pay for hosting services would be to create a decentralized crypto token that, when awarded to a user, is sent on the back end to their instance.

If we put this token on some of the exchanges then people could pay with either real money by buying tokens and gifting/awarding them, or converting other crypto into Fediverse tokens.

We could even use the same blockchain to allow minting of one off or limited run tokens where users can create their own.

It would also be cool if awards attached to the profile and were federated, but with the ability for the user to make tokens private.

Just a thought, because someone wanted to give me gold for a post, and I linked to our donation page, but man it would be easy if they could have just purchased a token and sent it right then and there without having to go through all the different instance donation methods.

Sadly, I am not a programmer so hopefully this idea is worth a shit and one of you smart people takes it and runs.

10
10
submitted 2 years ago* (last edited 2 years ago) by jgrim to c/lemmy_mod_tools
 
 

Hello,

I've been reading over all the feedback provided by the community. This is an update on progress, along with some questions for the community.

I'm currently working on the Detailed Design of the project. This is a document that serves two purposes:

  1. Ensures what I plan to build is what is expected
  2. It forces me to think through the project in detail to find any holes early on

While working on this document, I realized it would be great to separate out the additional contributions the community can make that don't fall within this project.

I am going to define the following feature sets:

  • In the scope of this project
  • Which parts must be a core enhancement
  • Out of the scope of this project
  • Finally, the requests that require significant design decisions today

In-Scope Features

There is a clear set of features that will be included. These are must-haves for this project to be a success.

  1. Roles & Responsibilities
    1. The ability to create super admins, moderators of the entire site, single communities, and/or just application reviews.
  2. Smart mod queue ranking
    1. The ability to rank local or external posts higher.
    2. The ability to rank posts by how well the OP behaves (whether they already have moderation strikes against them, etc.)
    3. Certain words, phrases, or domains will rank reports higher. These words can also be used to auto-report posts.
  3. A way to resolve reports by sending and recording a warning to the user, or by suspending them for a period of time or permanently.
  4. Private notes for local moderators on posts, accounts, comments, users, etc.
  5. The ability to search & filter the mod log by user, post, comment, community, and local or remote instance.
  6. Statistics to help find:
    1. Overactive users that like, comment, post
    2. Retention
    3. Report details
      1. Open reports
      2. Resolved
      3. Response time
    4. Totals on users, communities, comments, etc.
    5. Likes of posts & comments over time
  7. List of posts from users of your community to other communities. For general scanning of busy communities.
  8. List of comments from users of your community to other communities. For general scanning of busy communities.

Must be core enhancements

Several requests are changes to the core of Lemmy and cannot be accomplished by an external tool.

  1. Restricting content from federated instances from being pulled locally. Federation is all or nothing. It would be a terrible user experience for a post to exist on the main instance but never be pullable locally without some notification that the content broke local rules.
  2. Restricting federation in any form.
  3. Forcing federated communities and/or instances to moderate your instance rules.
  4. Blocking federated users from local communities in any way.

Out of scope

Some features just cannot fit within this project's scope; at most I can help push for them to be added to the core. If these are important to you, please work with the core team to add them.

  1. Customer service tasks
    1. Password reset (site handles this)
    2. Changing user account details
  2. Federation tools
    1. Most federation-related requests will require major changes to the core and should be done as core contributions.
    2. Limiting which posts show or are hidden from the feed.
  3. User tracking across instances.
    1. This feature would require a single instance, a central database to track, and/or enhancements to the federation logic within Lemmy.
    2. This includes preventing certain users from posting in a community.
  4. Any bugs with Lemmy

Finally, a design decision

Some features will require some design decisions to be possible. Mostly, the ability for more joint moderation between instances. I'll list the features below:

  1. Strike count across instances
  2. Risk ranking of users across instances
  3. Common SPAM detection
  4. Instance frustration detection (lots of strikes against its users, lots of removal of communities, etc.)
  5. Shared private notes on users/communities/posts, etc

These features will require some central moderation hub, which brings me back to one of the original questions. Do you want to self-host without central features, or do you want there to be a central site (like mods.socialcare.cloud) where you log in and interface with your instance?

Self-hosting brings additional costs. Some data will need to be stored for this moderation tool, and it will scale with your site's traffic. So for site admins, here are the pros and cons of self-hosting versus a cloud solution:

For cloud hosting

Pros

  1. Simple setup and zero to low cost
  2. Better communication between instance mods
  3. Quicker development
  4. No need to devote development hours to installation scripts, guides, community support, etc.

Cons

  1. Admins cannot modify code directly
  2. Potential subscription cost (no monetization has been figured out to run the servers/development)

For self-hosting

Pros

  1. Admins can directly modify code rather than wait for updates or changes.

Cons

  1. Slower to develop
  2. Potentially complicated installation
  3. Might be less adopted by users

What's next?

I'm working on the Design Doc for what is in scope. These features will be built regardless. This will be done by EOD tomorrow (July 2nd, 2023).

After that, I will break ground on development. The plan is to split some of the development into microservices. This should allow for parallel development for whoever wishes to contribute.

"What can I do?"

Please help me figure out what to do with the core enhancements. Let me know if anyone wants to take them on and own them, or if you'd prefer I create GitHub issues for the core team. I need you to tell me what you want to do or can do.

Please help me add to this pro/cons list for self-hosted vs. cloud-hosted, and let's decide.

Thanks, jgrim

11
3
submitted 2 years ago by jgrim to c/lemmy_mod_tools
 
 

I'm still actively working on the design doc. I've had to prioritize some full-time work this week, so there is not a lot of progress to show.

I've created the repositories to store the backend and front-end code: https://github.com/jgrim/socialcare https://github.com/jgrim/socialcare-ui

This is a great place to add suggestions and track progress. I plan to fully utilize GitHub's project tools to track.

Oh, and I named the product socialcare... I own the domain socialcare.cloud. Let me know if you disagree.

Thanks!

12
 
 

cross-posted from: https://lemmy.dbzer0.com/post/220288

A new feature has been deployed on the Fediseer: it can auto-generate special .svg badges for your Fediverse domain, which you can embed directly.

The images have an embedded link to the endpoints proving this, but that doesn't work in markdown, so when embedding in markdown, you need to put the link manually.

Guarantees

https://fediseer.com/api/v1/badges/guarantees/{domain}.svg

This badge will display which other fediverse domain guaranteed that your domain is not spam. Remember each instance can only have 1 guarantor due to the chain of trust.

Example:

[![](http://fediseer.com/api/v1/badges/guarantees/lemmy.dbzer0.com.svg)](https://fediseer.com/api/v1/whitelist/lemmy.dbzer0.com)

Endorsements

https://fediseer.com/api/v1/badges/endorsements/{domain}.svg

This badge will display a count of how many other fediverse domains have endorsed yours. An instance can endorse another instance for any reason.

Example:

[![](http://fediseer.com/api/v1/badges/endorsements/lemmy.dbzer0.com.svg)](https://fediseer.com/api/v1/endorsements/lemmy.dbzer0.com)

Display

You can place these anywhere you want on your site, but the obvious suggestion is the main sidebar. This will work for any domain known to the Fediseer. If your domain is not known, simply claim it and then find someone to guarantee for you.

13
 
 

cross-posted from: https://programming.dev/post/222740

I have seen a lot of calls around Lemmy for more moderation tools. I have been working on a Lemmy PowerShell module for a few weeks now, and I went ahead and released a preview version with multiple moderation tools now available. The module can perform the following tasks using simple commands:

  • Search posts and comments
  • Remove a post
  • Remove a comment
  • Lock and unlock posts
  • Add and remove moderators
  • Create new posts and comments

You can get started now by installing the module through the PowerShell gallery.

Install-Module Lemmy-preview
Import-Module Lemmy-preview

If you are not familiar with PowerShell, I've included detailed instructions in the GitHub repo with lots of examples. https://github.com/mdowst/Lemmy-PowerShell

If you run into any issues please let me know either here or by submitting an Issue to the repo.

14
5
submitted 2 years ago* (last edited 2 years ago) by db0@lemmy.dbzer0.com to c/lemmy_mod_tools
 
 

cross-posted from: https://lemmy.dbzer0.com/post/185949

If you think the idea of the Fediseer is a good one, we could use your help!

If you have an instance, make sure you've claimed it. To claim it, you can use this curl command

DOMAIN=lemmy.dbzer0.com
ADMIN=db0
curl -X 'PUT' \
  "https://fediseer.com/api/v1/whitelist/${DOMAIN}" \
  -H 'accept: application/json' \
  -H 'Client-Agent: unknown:0:unknown' \
  -H 'Content-Type: application/json' \
  -d "{
  \"admin\": \"${ADMIN}\"
}"

In the above bash script, simply replace DOMAIN and ADMIN with your own values. If you're on Windows, you can use Git Bash to run it.

Now you simply need to wait for someone to guarantee your instance. You can ask in this thread, or just look for other guaranteed instances which share your values and ask them. In fact, if you pass a "guarantor": "domain.tld" key/value in the payload above, the admins of that instance will get a PM asking them to guarantee for you! A sketch of that variant follows below.
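For example, a claim that also suggests a guarantor might look like this (same call as above, with other.instance.tld standing in for whichever instance you'd like to ask):

DOMAIN=lemmy.dbzer0.com
ADMIN=db0
GUARANTOR=other.instance.tld
# Same claim call as above, with a suggested guarantor added to the payload;
# that instance's admins then receive a PM asking them to guarantee you.
curl -X 'PUT' \
  "https://fediseer.com/api/v1/whitelist/${DOMAIN}" \
  -H 'accept: application/json' \
  -H 'Client-Agent: unknown:0:unknown' \
  -H 'Content-Type: application/json' \
  -d "{
  \"admin\": \"${ADMIN}\",
  \"guarantor\": \"${GUARANTOR}\"
}"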

Once you get your API key via PM, you can then help us add more instances. If you know of any instances that are definitely not spam, simply use the curl call below to guarantee for them as your own instance. They don't have to be claimed yet.

APIKEY="abcdefsawadf"
DOMAIN="notspam.domain.tld"
curl -X 'PUT' \
  "https://fediseer.com/api/v1/guarantees/${DOMAIN}" \
  -H 'accept: application/json' \
  -H "apikey: ${APIKEY}"

Alternatively you can use the API directly so you don't have to edit curl commands.

I hope soon we'll have a working GUI which will make this very painless.

15
 
 

cross-posted from: https://lemmy.dbzer0.com/post/163869

I have updated the content of this devlog to match the current workflow of the API, with the streamlined registration and claims

16
 
 

There's some heated discussion going on about a community dedicated to Donald Trump, and a few days ago we started receiving concerning news about bot-infested instances.

I believe there are three additional settings for instance administrators which Lemmy could implement in the future, and they're probably worth discussing first. These would be:

  1. Block specific users or users from specific remote instances from creating new posts in local communities (or have them require approval).
  2. Block specific users or users from specific remote instances from writing comments to posts in local communities.
  3. Block specific users or users from specific remote instances from upvoting or downvoting posts and/or comments in local communities.

These options alone could allow concerned instance admins to prevent brigading and content manipulation without necessarily defederating, which should be left as a last resort.

As a small instance admin myself I would be shooting myself in the foot if I were to block a very large instance, but maybe it would make sense to require approval for posts from certain remote users.

That said, one concern I have is that by having too many options we could make things much more confusing. But right now we have too few moderation options, and I can't think of other ways to keep federation while also having a way to prevent brigading or content manipulation.

17
 
 

cross-posted from: https://lemmy.world/post/370342

I'm kicking around a few feature requests.

One of them I've already created in GitHub, as it seems appropriate for mainline Lemmy, but a couple of others I think are better suited to third-party development. Since I'm more a product management / sysadmin type and not much of a coder, I'm putting these into the aether in case they drum up some interest among those considering features for bots or other tooling.

First is YouTube aggregation - it doesn't have to be limited to YT. I'm interested in the ability to automatically collect notifications from a list of channels (click that bell icon, baby) and generate a community post to link new videos.

Second is RSS aggregation. If a blog, magazine, or news site has a feed, and that feed features an entry matching keywords defined by a moderation team, generate a post to the community linking to that content.

If these capabilities exist already for Lemmy, even in a hackish way, please do let me know. Otherwise, these are things I am wishing for :)
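In case it helps anyone prototype this, the raw ingredients already exist in Lemmy's HTTP API. Here's a rough, hypothetical bash sketch of the RSS idea (instance URL, community ID, bot credentials, feed, and keyword are all placeholders, and it assumes a Lemmy 0.19-style bearer-token login); the YouTube case would be the same pattern pointed at a channel's feed:

# All values below are placeholders for illustration only.
INSTANCE="https://lemmy.example.org"      # your instance
COMMUNITY_ID=42                           # numeric ID of the target community
FEED_URL="https://example.com/feed.xml"   # feed to watch
KEYWORD="lemmy"                           # keyword chosen by the mod team

# Log in as the bot account and pull the JWT out of the response.
JWT=$(curl -s -X POST "${INSTANCE}/api/v3/user/login" \
  -H 'Content-Type: application/json' \
  -d '{"username_or_email": "rss-bot", "password": "changeme"}' | jq -r '.jwt')

# Grab the newest item's title and link. The first <title>/<link> pair in an RSS
# feed belongs to the channel itself, so this naive sketch takes the second pair.
FEED=$(curl -s "$FEED_URL")
TITLE=$(echo "$FEED" | grep -oP '(?<=<title>).*?(?=</title>)' | sed -n '2p')
LINK=$(echo "$FEED" | grep -oP '(?<=<link>).*?(?=</link>)' | sed -n '2p')

# Post only if the newest item matches the keyword. Deduplication, scheduling,
# and JSON-escaping of odd titles are left out of this sketch.
if echo "$TITLE" | grep -qi "$KEYWORD"; then
  curl -s -X POST "${INSTANCE}/api/v3/post" \
    -H 'Content-Type: application/json' \
    -H "Authorization: Bearer ${JWT}" \
    -d "{\"name\": \"${TITLE}\", \"url\": \"${LINK}\", \"community_id\": ${COMMUNITY_ID}}"
fi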

18
 
 

Hello,

I'm still reviewing the technical limitations of building this tool.

I'm deciding between two paths from the original list. The devs will not have time to do this themselves; the community must act. You can find out why by reading their new blog post.

I listed six options before. You can review them all in the post Initial thoughts.

Decision

I'm considering Option 1 and Option 4. I'll explain why not for some and why for these two options.

Why nots

  • Why not 2:
    • I've been looking for a project to become more than a side hustle. I believe making this self-hosted could not be monetized properly. My options are donations or charging for the software. Donations are too unpredictable, and charging for a tool that enhances an open-source project feels wrong and unsustainable. It would have to be a charge for the admins of fediverse products, who may not be willing to spend money on this tool, especially as Lemmy builds out its own features over time.
  • Why not 3:
    • I want to build something that's part of the fediverse, not just Lemmy. This option makes it directly and strictly for Lemmy. There is the possibility of building additional adapter services; however, that may not be worth the effort.
  • Why not 5:
    • They have stated that they cannot take on any community-requested features. They are working through bugs.
  • Why not 6:
    • Same as "Why not 3": I want to contribute to the fediverse, not just a single application.
    • Moderation will need a lot of information and tools to succeed. I don't believe the current Lemmy codebase and database structure are ripe for such a large enhancement.

Why yes

I'll now clarify why I am interested in options 1 and 4.

  • Why 1
    • This would become a potential business. It could be monetized with monthly fees and add-on products. It would be much easier to keep costs aligned with the size of the community wishing to use it.
    • I could potentially make this full-time if it grows enough.
  • Why 4
    • I'm calling this the WordPress model. WordPress is completely free to download and install. However, you can pay them to host and maintain it for you. This is much easier for most people. Following this same pattern, I could offer managed hosting for Lemmy and fediverse moderation tools as a subscription.
    • It gives people a choice, and it doesn't feel as greedy as making a closed-source SaaS. However, this does add complexity to the development. I would probably release the SaaS solution before the self-hosted one to work out issues.
    • I could open-source it and get help from the community.

Why no to yeses

Options 1 and 4 also have their downsides:

  • Why not 1
    • People may not pay for it. A lot of hobbyists have been creating their own servers. There may be only a handful of servers large enough to need such a service.
    • I'd be counting on the growth of the fediverse and a need for centralized and smart moderation.
    • Closed source code. This is against the fediverse and FOSS.
  • Why not 4
    • I don't want to become a product manager. It would be much more efficient for me to develop independently from the start, then lean on the community for enhancements and fixes. Once it's open, I'd have to manage a lot of distractions before it's ready for ever-watching eyes.

What about the roadmap and timeline?

I'm still working through the feature set. I think there are some clear winners in what people need. The first release would depend on the model chosen. A feature-rich product would require some financial incentive for me to get it out quicker. It would be difficult to justify to my wife hours of my time away from my family for something that doesn't benefit her or our son.

Next steps

I need to read a bit more into ActivityPub. I need a better understanding of some things before a path is chosen. I'm also working through the feature requests posted by others. I will create a list of base features and try to release enhancements on a roadmap. Next, I'll work on a detailed design doc to ensure I don't miss anything. Finally, get coding.

I don't have dates for these items yet. It's been a busy Father's Day weekend, and I've also been under the weather for a few days. I'll keep you all posted. Please provide feedback here or within the Matrix channel; I will try to watch all the channels there.

19
 
 

cross-posted from: https://vlemmy.net/post/63994

Hi all, thanks so much for your input on this post regarding de-federation. I have read through every comment and have taken all perspectives on board. I think I have settled on my stance on the matter for the moment.

For now I won't be de-federating from any other instances or servers. I feel it is important, as many people have mentioned, to "leave the power to the users" and let them decide the content they wish to see. To that end, I think it will be extremely important to educate people on how to block communities that they do not wish to see. If you all could assist me with that and help guide new users, it would be much appreciated. I will be creating a post on this soon and will put a note in the sidebar about it.

I am also aware, and agree, that certain content has no place in people's feeds and that some users want nothing to do with it. I myself am one of those people. Therefore, there are a few things that I will still be strongly pushing for.

Site admins are currently discussing what moderation tools we think we need. This is what I propose and hope to implement on VLemmy.

A mod/admin tool that allows me to block certain communities/servers from appearing in all/new/hot feeds while they will still show for users that are subscribed. This stops the unwanted propagation of harmful/unwanted content while not restricting access to it and allowing users to interact if they wish.

A user tool that will show the users what communities the admin/mod has hidden from showing up in the new/all/hot feeds and allow them to uncheck it for themselves if they wish.

I feel like this is a reasonable compromise and I hope that this will help give a balanced experience for all. Critiques, criticism, refinements, suggestions, questions, all are welcome in the comments and are much appreciated.

20
10
Initial thoughts (self.lemmy_mod_tools)
submitted 2 years ago* (last edited 2 years ago) by jgrim to c/lemmy_mod_tools
 
 

Welcome

Hello everyone,

I can tell there is a lot of interest in a tool to enhance the mod abilities of Lemmy.

I've been researching ways to do it. I have six options that I'm considering.

I'm trying to keep the topics high-level and avoid implementation discussions.

These are undeveloped, high-level thoughts. I still have a lot of research to do into the Lemmy code base and ActivityPub.

It's late for me, so these notes are a stream of thought. There may be changes as I think more about this or get community feedback.

Option 1 - SaaS

Summary

A service I host. Probably maintained by donations. I might have an additional tier for enhanced moderation tools.

Pros:

  • Simple setup/registration.
  • No additional hosting cost for Lemmy admins.
  • Ability to use community-shared moderation.
  • Community data analysis to find bad actors across the federation.
  • Quicker to develop. No need to worry about tech support, setup, configuration wizards, etc.
  • Could be set up with any ActivityPub instance. (Mastodon, Lemmy, etc.).
  • If successful, it could be my full-time gig.
  • I would be more engaged

Cons:

  • I have to store personally identifiable information & manage other compliance regulations.
  • Feature releases are limited to a single pipeline.
  • You don't get your data.
  • No direct DB access to do advanced queries and changes.
  • Limited by ActivityPub capabilities.

Option 2 - Self-hosted

Summary

Admins would have to install, set up, and maintain the software.

Pros:

  • No cost for me to run anything.
  • Admins store their data.
  • Direct access to databases to do advanced querying.
  • Open source community project requiring/allowing outside development

Cons:

  • Additional network traffic if shared moderation is used.
  • Link flooding and spam tracking across instances aren't as likely to exist.
  • Would mostly be purpose-built for Lemmy. No additional federation support.
  • I would be less engaged and rely on community support

Option 3 - SaaS that requires a self-hosted service

Summary

This is sort of a mix of the above. The interface and interaction are central; however, an agent must be installed to interface with the Lemmy instance.

Pros:

  • Best of both worlds.
  • A custom API could be built to work with the adapter service. Not limited by ActivityPub.

Cons:

  • Worst of both worlds.
  • Purpose-built for Lemmy to start. (Adapters for other systems could be built).
  • More complex to build.
  • Additional server cost and maintenance. More complicated setup for admins.

Option 4 - SaaS & Self-Hosted

Summary

You could use the SaaS solution or install it locally.

Pros:

  • I could charge for SaaS to finance the project.
  • All the pros of SaaS and Self-hosted
  • The people have choice

Cons:

  • More complex to build
  • All the cons of SaaS and Self-hosted

Option 5 - Ask the devs to do it

Summary

We tell the devs of Lemmy to include better tools and how we want them.

Pros:

  • Less work for me/us
  • Moderation built into Lemmy and not a separate app

Cons:

  • We have to wait for them to do it all
  • They might say no

Option 6 - Contribute to the dev repo

Summary

Build enhancements to the core of Lemmy's moderation tools.

Pros:

  • Community support to add features
  • Users only have to update to get new features, no configuration
  • No additional server costs

Cons:

  • Only works on Lemmy
  • Lemmy becomes a Monolithic application - scaling challenges
    • Pro: A flag could be added to only enable moderation-specific scheduled tasks on a single instance.
21
8
submitted 2 years ago* (last edited 2 years ago) by Wander@yiffit.net to c/lemmy_mod_tools
 
 

Apologies for this second post. This one should be the last. I've continued thinking about the best way to handle moderation in the fediverse, especially because I believe that the fediverse as a whole will live or die based on how moderation is handled.

I've worked in the past with email products, email being one of the clearest examples of a federated network. Even though email is quite different, since there are no public messages, I believe we can draw some inspiration regarding how to handle federation and moderation, specifically regarding a key metric that also applies to the fediverse: the degree of "spamminess" of a message.

But let's first see some of the principles.

Principle 1: Local content, local values

This principle states that instance administrators should not be expected to host or make available content that goes against their values and beliefs. What is included in this principle:

  • Restricting certain content in local communities.
  • Restricting local users from displaying certain behavior, even when commenting in remote communities.
  • Silencing (not yet implemented) or removing individual remote communities so that they do not show in their list of communities or their feed, thus not giving them a platform.

Most importantly, this principle incentivizes every user to find a home instance that matches their beliefs and values.

Principle 2: Everyone cleans their own turf

Reports made about remote users or content on remote instances should be handled first and foremost by remote community moderators and, secondly, by remote admins.

Users should block content or users after reporting and enough time should be allowed for remote instances, especially smaller ones, to react to a report.

Local admins might act immediately if it's an urgent matter such as doxxing or private information, or they can issue a temp ban of, for example, 3 days, during which the remote community has time to catch up. Ideally, however, local admins will not have to deal with issues caused by remote users on remote communities, because it's not feasible for smaller instances to moderate the whole userbase of a large instance. We need to learn to delegate those reports and have them resolved remotely even if we receive them.

If the remote instance does not moderate effectively or according to our beliefs and values, the third principle comes into play.

Principle 3: Users and communities have a degree of spamminess and of utility.

Let's talk about spamminess first. Basically every interaction can be judged on its degree of "spamminess".

  • If a remote user forces itself into a community to send offensive messages, that's spam.
  • If a remote user is sharing their opinion in a thread created specifically to discuss such opinions, that's not spam.

Whether a comment is spammy or not depends on the nature of the interaction. If someone is going out of their way to cause drama, the interaction is probably spammy. Basically, we need to ask ourselves whether the user has been asked, or is allowed, to share their opinion.

Spammy users should be banned by the instance that hosts them. If that doesn't happen, it could count as a strike against the user's instance. If a remote instance has been given enough time to fix issues and it still keeps enabling spammy users, then that could be grounds for a block, not unlike the blocklists for mail servers that send spam.

Examples of spammy behavior:

  • Brigading
  • Trolling
  • Going out of their way to cause drama or irritate others
  • Concern trolling
  • And of course, not correctly setting NSFW flags

These are all examples of forced / involuntary interactions that should be avoided at all costs.

Now what about remote users who voice a controversial opinion in threads where they were asked to share such opinions (i.e., they are civil)?

In that case the instance admin should ask themselves about the utility of this remote user or remote instance.

Provided they act in a civil manner and are not spammy, it might be reasonable to a) not act against them, b) silence them (not yet implemented on Lemmy), or c) block them in local communities only (not yet implemented on Lemmy).

A civil user with controversial opinions, depending on the context and what those opinions are, might still have some utility. For example, they could contribute positively in other places with tech guides, interesting content, etc., and we do not want to be overzealous in blocking them. Maybe it's something we can leave up to each user (thus the importance of users learning to block).

Anyway, the idea here is that the admin team needs to make a judgement call on the perceived utility and decide which action is better. Given that the user is civil, maybe a silence or a block in local communities suffices. This is all relative, and every admin will need to decide on their own.

The most important point: whether a civil user with controversial opinions is banned, silenced or otherwise, the user's home instance should not be affected. Mostly. Let me explain.

Regarding instances themselves, they also have a utility score. For example, if an instance is solely dedicated to the support of values that I find strongly offensive, then there's little point in federating with it. It's unlikely that I'll get any net utility from either its users or its communities.

However, this could be different with large general instances, where maybe I'll end up flagging 1,000 different users who are civil but have controversial opinions, yet I still get utility from the other 99%.

Of course, this only works if these remote users are not spammy. If a remote instance is large and enables spammy users as described above, then that 1% of users could very well cause me to block the whole instance, especially if we are constantly harassed by them. I suspect this is what may have happened recently with Beehaw, but I don't want to get into that, since this post is about general guidelines I've been thinking about.

In summary, regarding principle 3:

  • Spammy users are bad actors that drastically lower the utility of the remote instance that hosts them
  • Instances that enable spammy users are bad actors and have drastically low utility.
  • Remote users that are civil but have controversial opinions have lower utility, but action can be variable depending on context.
  • The severity of an action should depend on the utility of a user or an instance.

And this brings me to my last point: We instance admins need to be extremely realistic about the utility that our userbase derives from remote instances and remote users.

I can't emphasize this enough. Suppose I'm an instance admin and I see one of these civil users with controversial opinions in the wild; I can't fucking go on a crusade and threaten to defederate from the whole instance because it allowed a discussion to happen whose contents I don't agree with. I can't use my userbase as a blunt tool to threaten defederation from instances that don't share my world view.

Referring back to the first principle, as an instance admin it's understandable that I don't want to host or platform certain opinions, and I need all the tools to block those remote communities, users, and even instances that are solely or overwhelmingly dedicated to something I strongly oppose.

However, if we want federated alternatives to succeed, it does not make sense for Gmail to block Outlook because Sundar Pichai doesn't like Satya Nadella's world view / politics / opinions. That would be weaponizing your userbase.

Which brings me to the last principle:

Principle 4: don't weaponize your userbase to try to impose your values

  • If you don't like X, Y, Z remote communities on your instance, hide them or block them (this covers the first principle of not giving a platform to content you strongly disagree with).
  • If a remote user is spammy or an instance enables spammy behavior, block them.
  • If a remote user or remote instance is dedicated so overwhelmingly to something that you and your users see no value in federating with them, silence them or block them.
  • However, instance admins should not threaten defederation because a remote instance which otherwise has plenty of utility has some aspects that they disagree with, especially if civility is maintained. At worst, that remote instance should be unlisted from public timelines or made "follower-only" (following the first principle), but not outright blocked.

The reason I bring this up is because I have a huge fear that we could end up waging petty wars and splitting up the fediverse, decreasing the overall usefulness of federated alternatives. If this happens we will never succeed.

In summary:

  • You're not forced to platform content you disagree with
  • Focus on moderating your instance and your users and ask other instances to keep theirs moderated.
  • Go hard on spammy interactions, and be moderate when things remain civil (unless utility is definitely negative; be realistic and admit to yourself that a single controversial discussion won't eliminate the utility of an instance that's otherwise fine).
  • Don't threaten to de-federate from an instance that your users find useful only because it allows civil, non-spammy content that you disagree with. At worst, make it subscriber-only or block only the specific communities so that you don't give a platform to the parts you disagree with. If in doubt, let your users block remote content.

Note: making a remote instance's content invisible unless a user is subscribed to that instance's community (unlisted/subscriber-only) is not a feature that Lemmy currently supports, but I hope it will be implemented soon.

22
 
 

Here's a laundry list of sorts with tons of tools we'd like to see:

  • Role for approval of applications (to delegate)
  • Site mods (to delegate from admins)
  • Auto-report posts with certain keywords or domains (for easier time curating without reports)
  • Statistics on growth (user, comments, posts, reports)
    • User total
    • MAU (monthly active users)
    • User retention
    • Number of comments
    • Number of posts
    • Number of reports open
    • Number of reports resolved
  • Sort reports
    • by resolved/open
    • by local/remote
  • Different ways to resolve a report
    • Suspend account for a limited amount of time rather than just banning
    • Send warning
  • Account mod info
    • Number of 'strikes' (global and local) and reports
    • Moderation notes
    • Change email
    • Change password
    • Change role
  • Ability to pin messages in a post
  • Admins should be able to purge
  • Filter modlog to local
  • Better federation tools (applications to communities, limiting)
    • Applications to communities to allow safe spaces to exist (people should not be able to just "walk in" on a safe space - similarly to follow requests in Mastodon in a way)
    • Limiting (Lock our communities down from certain instances but still allow people using our instance to talk to people from those instances)

Obviously considering the moment when this is being made - federation tools are our highest priority.

23
5
submitted 2 years ago* (last edited 2 years ago) by Wander@yiffit.net to c/lemmy_mod_tools
 
 

Hello! I'll try to present my view on how instance moderation can be handled in the fediverse in order for small instances to be able to exist. This view tries its best to keep federation intact while also making it possible for a small instance with limited moderators to handle things.

Please note that I've been cultivating this for a while now. It is not related to any recent events. It is also primarily applicable to Mastodon, but I'm trying to adapt it to lemmy.

Basically it goes like this: Focus on moderating content in this order. The lower the number the higher the priority.

  1. Content sent by your instance's users
  2. Content sent to your instance's users or communities by remote users.

...

  3. Content sent between remote users in remote communities

Basically, as a moderator for instance A, I don't need to know right away that a user from instance B said something controversial in a community on instance C. I might not want to care about it at all.

While it's true that my users will see this content through my instance and will likely report it because it is controversial / offensive / problematic / etc., I have limited resources and need to be able to rely on the mod teams of instance B and instance C to do their job first and handle that scenario.

As for the users, they should of course report content they believe violates the rules, but they should also learn to rely more often on the block button, whether for remote users, remote communities, or (hopefully in future versions of Lemmy) remote instances.

If I wanted something from an automated moderation tool it would be the following:

  • Keep track of how often a remote user is reported for remote content on a remote community over time, giving them one strike for every day there's one or more of such reports.

That way, if the user collects ten strikes over time, for example, I could have a look at whether I believe this user's home instance is enabling toxic behavior, and if that user ever comes to communities on my instance, I'll have them flagged and will know exactly why. The benefit here is that I can take things much slower, because it's a remote user on a remote community and I don't need to act immediately.

There are some exceptions, such as illegal content that could harm my instance by being cached, but overall most reports I've ever received are due to toxic behavior, which my instance's users should learn to block while the remote mods do their job.

Regarding priorities 1 and 2: for content generated by my instance's users, this is where I need to be quick. Just like I want to rely on remote moderators to do their job, remote moderators will want to rely on me to do my job when it involves users of my instance.

Also, if there are remote users harassing local users or leaving toxic comments in our communities or posts, as an instance admin I will need to be quick, but I will also have to rely on the moderators of the specific community.

To be honest, the burden of moderating a community should be placed on the creator / moderators of that community. As an instance admin this allows me to, again, be more reactive while I know that the owners of that community are cleaning up stuff. Thus, even if I receive a report, I should wait and let the community moderators handle it.

Only in this way, is it possible to keep federated with a large amount of instances as a small instance with few moderation resources.

In summary:

  1. Make sure local users behave when they're in remote communities.
  2. Make sure your local communities follow the instance's rules
  3. Let community moderators handle conflict and moderate their community as they see fit (within boundaries). Only step in if things escalate, get out of hand, or there's a larger "raid" / harassment campaign.
  4. Hold community owners and moderators accountable to moderate their own spaces.
  5. Let remote moderators and admins do their job if stuff happens on remote instances between remote users.
  6. Potentially keep track of such scenarios that were reported to you by local users, if anything to have some data in order to avoid a bad actor if they were ever to come across your instance or to determine if there's an instance that's not moderating properly.

This means it's very important for instance admins to give remote instances and remote community moderators time to handle a situation. Smaller instances especially might take a few hours or even a couple of days to deal with things. Unless it's a serious life-or-death scenario, or something like doxxing, admins and moderators should tell their users to block, report, and move on, as it can and should take a bit of time to do things properly.

One aspect I didn't mention is toxic remote communities. In this case I might "remove" the community so it isn't accessible from my instance and I'm not giving it a platform. In case the whole instance is dedicated to toxic communities, then I might block the instance as a whole.