this post was submitted on 24 Jun 2023
51 points (93.2% liked)

Lemmy

12546 readers
24 users here now

Everything about Lemmy; bugs, gripes, praises, and advocacy.

For discussion about the lemmy.ml instance, go to !meta@lemmy.ml.

founded 4 years ago

See THIS POST

Notice the 2,000 upvotes?

https://gist.github.com/XtremeOwnageDotCom/19422927a5225228c53517652847a76b

It's mostly bot traffic.

Important Note

The OP of that post did admit to purposely using bots for that demonstration.

I am not making this post specifically about that post. Rather, we need to collectively organize and find a solution.

Defederation is a nuke-from-orbit approach, which WILL cause more harm than good over the long run.

Having admins proactively monitor their content and communities helps, as does enabling new-user approvals, captchas, email verification, etc. But this does not solve the problem.

The REAL problem

The fediverse is so open that there is NOTHING stopping dedicated bot owners and spammers from...

  1. Creating new instances for hosting bots, and then federating with other servers. (Everything can be fully automated to spin up a new instance in UNDER 15 seconds.)
  2. Hiring kids in Africa and India to create accounts for 2 cents an hour. (NEWS POST 1, POST TWO)
  3. Lemmy is EXTREMELY trusting. For example, go look at the published stats for my instance (lemmyonline.com). I can assure you, I don't have 30k users and 1.2 million comments.
  4. There are no built-in "real-time" methods in the UI for admins to identify suspicious activity from their users; I am only able to fetch this data directly from the database. I don't think it is even exposed through the REST API.
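For illustration, here is a minimal sketch of the kind of check an admin could run over exported user data. The record layout is entirely hypothetical; these field names are not Lemmy's actual schema, and the rate threshold is an arbitrary placeholder.

```python
# Hypothetical sketch: flag accounts whose posting rate is implausible.
# Field names ("name", "registered", "comment_count") are illustrative only.
from datetime import datetime, timezone

def flag_suspicious(users, max_comments_per_day=200):
    """Return usernames whose average comment rate exceeds the threshold."""
    now = datetime(2023, 6, 24, tzinfo=timezone.utc)
    flagged = []
    for u in users:
        age_days = max((now - u["registered"]).days, 1)  # avoid division by zero
        if u["comment_count"] / age_days > max_comments_per_day:
            flagged.append(u["name"])
    return flagged

users = [
    {"name": "alice",  "registered": datetime(2023, 1, 1, tzinfo=timezone.utc),
     "comment_count": 340},
    {"name": "bot123", "registered": datetime(2023, 6, 23, tzinfo=timezone.utc),
     "comment_count": 5000},
]
print(flag_suspicious(users))  # a day-old account with 5000 comments stands out
```

A real tool would obviously need smarter heuristics, but even a crude rate check like this is more than the UI currently surfaces.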

What can happen if we don't identify a solution

We know Meta wants to infiltrate the fediverse. We know Reddit wants the fediverse to fail.

If a single user with limited technical resources can manipulate content, as was proven above...

What is going to happen when big-corpo wants to swing their fist around?

Edits

  1. Removed most of the images containing instances. Some of those issues have already been taken care of. As well, I don't want to distract from the ACTUAL problem.
  2. Cleaned up post.
50 comments
[–] delendum@lemdit.com 1 points 1 year ago (1 children)

If you always had e-mail verification turned on, you can get rid of some of these junk sign-ups relatively easily. I wrote a guide for it here: https://lemdit.com/post/16430
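As a rough illustration (not the method from the linked guide), a sketch of selecting never-verified accounts past a grace period. The field names are assumptions, not Lemmy's real schema:

```python
# Hypothetical sketch: pick out junk sign-ups that never verified their e-mail.
# Field names ("email_verified", "registered") are assumed for illustration.
from datetime import datetime, timedelta, timezone

def purgeable_accounts(accounts, grace_days=3, now=None):
    """Return accounts with no verified e-mail that are past the grace period."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=grace_days)
    return [a for a in accounts
            if not a["email_verified"] and a["registered"] < cutoff]

accounts = [
    {"name": "real_user",   "email_verified": True,
     "registered": datetime(2023, 6, 1, tzinfo=timezone.utc)},
    {"name": "junk_signup", "email_verified": False,
     "registered": datetime(2023, 6, 1, tzinfo=timezone.utc)},
]
old = purgeable_accounts(accounts, now=datetime(2023, 6, 24, tzinfo=timezone.utc))
print([a["name"] for a in old])  # only the unverified account is purgeable
```

The grace period matters: legitimate users sometimes verify a day or two late, so deleting immediately would catch real people.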

From what I've seen, most of the bot sign-ups that are swelling instance User numbers wouldn't have passed e-mail verification. I think it was done mostly to prove a point, rather than an attempt to actually use those accounts.

Instances that didn't have e-mail verification turned on are in a much harder spot.

[–] xtremeownage@lemmyonline.com 1 points 1 year ago

I have a Kubernetes cronjob which automatically cleans those up every few days, along with one that cleans up the activity table.

[–] AmbientChaos@sh.itjust.works 1 points 1 year ago

Hey look, it's me in the picture! What a waste of my 15 minutes of fame

[–] Hizeh@hizeh.com 1 points 1 year ago

IMO long term Lemmy needs to move away from upvotes as a measure of interest and activity. That's too easy to manipulate.

Perhaps comment activity and interaction metrics would be better.
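To sketch the idea, a hypothetical score that weights distinct commenters above raw votes. The weights here are arbitrary placeholders, not a proposal for specific values:

```python
# Toy sketch: rank posts by signals that are harder to fake than raw upvotes.
# The weights are invented placeholders for illustration.
def engagement_score(post, w_votes=0.1, w_comments=3.0, w_unique=5.0):
    """Weight distinct commenters and comment volume above raw vote counts."""
    return (w_votes * post["upvotes"]
            + w_comments * post["comments"]
            + w_unique * post["unique_commenters"])

botted  = {"upvotes": 2000, "comments": 4,  "unique_commenters": 3}
organic = {"upvotes": 150,  "comments": 80, "unique_commenters": 45}
print(engagement_score(botted), engagement_score(organic))
# the organically discussed post outranks the vote-botted one
```

Of course comments can be botted too, but generating plausible conversation is far more expensive than clicking an upvote, which is the point of the weighting.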

[–] Cinner@kbin.social 1 points 1 year ago (1 children)

Reposting this comment from a reply elsewhere in the thread.

If anything, there should be SOME centralization that allows other (known, somehow verified) instances to vote to disallow spammy instances from federating, in some way that couldn't be abused. This may lead to a fork down the road (think BTC vs BCH) due to community disagreements, but I don't really see any other way this doesn't become an absolute spamfest. As it stands now, one server admin could flood their own server with their own spam, and once it starts federating, EVERYONE gets flooded. This also easily creates a DoS of the system.

Asking instance admins to require CAPTCHA or whatever to defeat spam doesn't work when the instance admins are the ones creating spam servers to spam the federation.
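A toy sketch of what such a trusted-instance vote could look like. The trust list, instance names, and threshold are invented for illustration, and this ignores all the hard governance questions of who maintains the trust list:

```python
# Hypothetical sketch: only count defederation votes from a shared trust list,
# and act only when a supermajority of the trusted set agrees.
def should_defederate(votes, trusted_instances, threshold=0.6):
    """Return True if enough trusted instances voted to defederate."""
    valid = {i for i in votes if i in trusted_instances}
    return len(valid) / len(trusted_instances) >= threshold

trusted = {"lemmy.ml", "lemmy.world", "sh.itjust.works", "beehaw.org", "lemmy.one"}
votes = {"lemmy.ml", "lemmy.world", "sh.itjust.works", "spam.example"}
print(should_defederate(votes, trusted))  # spam.example's vote is ignored; 3/5 passes
```

Requiring votes to come from a vetted set is what keeps a spammer from spinning up a hundred instances and voting themselves into legitimacy.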

[–] RoundSparrow@lemmy.ml 1 points 1 year ago (1 children)

There is no built-in “real-time” methods for admins via the UI to identify suspicious activity from their users, I am only able to fetch this data directly from the database. I don’t think it is even exposed through the rest api.

The people doing the development seem to have zero concern that all the major servers are crashing with nginx 500 errors on their front pages under routine, moderate loads, nothing close to major-website traffic. There is no concern for alerting operators to internal federation failures, etc.

I am only able to fetch this data directly from the database.

I too had to resort to this, and published an open-source tool (primitive and inelegant) to try to get something out there for server operators: !lemmy_helper@lemmy.ml

[–] xtremeownage@lemmyonline.com 1 points 1 year ago (1 children)

Thanks, I'll take a look at that one.

[–] RoundSparrow@lemmy.ml 1 points 1 year ago (1 children)

If you have SQL statements to share, please do. I'll toss them into the app.

[–] lemann@lemmy.one 0 points 1 year ago (1 children)

This is troubling.

At least we have the data though, hopefully these findings are useful for updating the Fediseer/Overseer so we can more easily detect bots

[–] xtremeownage@lemmyonline.com 0 points 1 year ago (4 children)

I really wish we would have a good data scientist, or ML individual jump in this thread.

I can easily dig through data, I can easily dig through code- but, someone who could perform intelligent anomaly detection would be a god-send right now.
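As a starting point, a very primitive check of the kind a data person might refine: flag days whose sign-up counts sit far above the historical mean. The numbers are invented; with such a small sample the achievable z-score is mathematically capped, hence the modest threshold.

```python
# Primitive sketch: flag days with anomalously high sign-up counts (invented data).
from statistics import mean, stdev

def anomalous_days(daily_signups, z=2.5):
    """Return indices of days more than z standard deviations above the mean."""
    mu, sigma = mean(daily_signups), stdev(daily_signups)
    return [i for i, n in enumerate(daily_signups)
            if sigma and (n - mu) / sigma > z]

signups = [12, 9, 15, 11, 14, 10, 13, 2400, 12]  # one day of mass bot registration
print(anomalous_days(signups))  # flags the spike at index 7
```

A real analysis would use robust statistics (median and MAD rather than mean and standard deviation, since a huge outlier inflates both), but even this would have caught the sign-up waves described in this thread.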

[–] Dirk@lemmy.ml 0 points 1 year ago (1 children)

We need browser fingerprinting for this.

[–] lukas@lemmy.haigner.me 0 points 1 year ago* (last edited 1 year ago) (2 children)
  1. Hiring kids in Africa and India to create accounts for 2 cents an hour.

Heads up that this depends on the size of the operation. Captchas are a solved problem: commercial software exists that solves them automatically. You migrate from pay-on-demand services to computer-vision software when it's financially beneficial.

Computers are cheaper and better at solving captchas than humans at the moment, and it doesn't look like that's going to change any time soon. As long as you pay attention to your proxies, it's rare to see solution attempts fail. Some pay-on-demand services no longer employ people at all.

[–] dedale@kbin.social 0 points 1 year ago (2 children)

Hello. The post you mentioned was made as a warning, to prove a point: the fediverse is currently extremely vulnerable to bots.

The user 'alert' made the post, then upvoted it with his bots, to prove how easy it is to manipulate traffic even without funding.

see:
https://kbin.social/m/lemmy@lemmy.ml/t/79888/Protect-Moderate-Purge-Your-Sever

It's proof that anyone could easily manipulate content unless instance owners take the bot issue seriously.

[–] db0@lemmy.dbzer0.com 1 points 1 year ago

Absolutely. A couple of others and I have been warning about this for a week now.

[–] xtremeownage@lemmyonline.com 1 points 1 year ago

I did update my post shortly before you posted this to include that, as well as removing a lot of the data for individual instances, as it distracts from the point/problem I am trying to identify.

The data, however, is quite valuable in exposing that this WILL be a problem for us, especially if we do not identify a solution for it.
