this post was submitted on 25 Dec 2024
1096 points (93.5% liked)

You Should Know

He generally shows most of the signs of the misinformation accounts:

  • Wants to repeatedly tell basically the same narrative and nothing else
  • Narrative is fundamentally false
  • Not interested in any kind of conversation or in learning that what he’s posting is backwards from the values he claims to profess

I also suspect that it's not a coincidence that this is happening just as the Elon Musks of the world are ramping up attacks on Wikipedia, especially because it is a force for truth in the world that's less corruptible than a lot of the others, and tends to fight back legally if someone tries to interfere with the free speech or safety of its editors.

Anyway, YSK. I reported him as misinformation, but who knows if that will lead to any result.

Edit: Number of people real salty that I’m talking about this: Lots

[–] douglasg14b@lemmy.world 76 points 2 days ago (2 children)

It's likely this is a bot if it's widespread. And Lemmy is INCREDIBLY ill-suited to handle even the dumbest of bots from 10+ years ago, never mind social media bots today.

[–] kava@lemmy.world 13 points 2 days ago (3 children)

To be fair, it's virtually impossible to tell whether a text was written by an AI or not. If some motivated actor is willing to spend money to generate quality LLM output, they can post as much as they want on virtually all social media sites.

The internet is in the process of eating itself as we speak.

[–] douglasg14b@lemmy.world 6 points 1 day ago (1 children)

You don't analyze the text necessarily; you analyze the heuristics, behavioral patterns, sentiment, etc. It's data analysis and signal processing.

You, as a user, probably can't, because you lack the information that the platform itself is in a position to gather and aggregate.

There's a science to it, and it's not perfect. Some companies keep their solutions guarded because of the time and money required to mature their systems & ML models to identify artificial behavior.

But it requires mature tooling at the very least, and Lemmy has essentially none of that.
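To make the behavioral angle concrete, here is a minimal sketch of one such signal: how regular an account's posting cadence is. The threshold and sample data are illustrative placeholders, not any platform's actual detector.

```python
import statistics

def interval_regularity(timestamps):
    """Coefficient of variation of the gaps between posts.

    Humans post at ragged, bursty intervals (high CV); a naive
    bot on a fixed schedule has a CV near zero.
    """
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean = statistics.mean(gaps)
    return statistics.stdev(gaps) / mean if mean else 0.0

def flag_account(timestamps, threshold=0.2):
    # Suspiciously clockwork cadence -> flag for closer (human) review.
    # The 0.2 threshold is an arbitrary placeholder.
    return len(timestamps) >= 5 and interval_regularity(timestamps) < threshold

# A bot posting exactly hourly vs. a human with irregular gaps (seconds):
bot = [i * 3600 for i in range(10)]
human = [0, 2100, 9000, 9500, 30000, 31200, 50000, 52000, 90000, 91000]
print(flag_account(bot), flag_account(human))  # True False
```

Real systems combine dozens of signals like this (timing, sentiment drift, reply graphs) in ML models rather than relying on one threshold.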

[–] kava@lemmy.world 3 points 1 day ago* (last edited 1 day ago)

yes, of course there are many different data points you can use. along with complex math you can also feed a lot of these data points into machine learning models and get useful systems that can red-flag certain accounts, then route them into processes with more scrutiny that require more resources (such as a human reviewing)

websites like chess.com do similar things to find cheaters. and they (along with lichess) have put out some interesting material going over some of what their process looks like

there are two things here. one is that lichess, which is mostly developed and maintained by a single individual, manages to run an effective anti-cheat system. so I don't think it's impossible for lemmy to accomplish these types of heuristics and behavioral tracking

the second thing is that these new AIs are really good. it's not just the text, but the items you mentioned. for example, say I train a machine learning model and a separate LLM on all of reddit's history. the first model is meant to emulate all of the "normal" human signals: make it post at hours that match real trends, vary the sentiment in a natural way, and post not at random intervals of time but at intervals that look like a natural distribution. the model will find patterns that we can't imagine and use those to blend in

so you not only spread the content you want (whether it's subtle product promotion or nation-state propaganda) but you have a separate model trained to disguise that text as something real

that's the issue: it's not just the text. if you really want to do this right (and people with $$$ have that incentive), as of right now it's virtually impossible to prevent a motivated actor from doing this. and we are starting to see this with lichess and chess.com.

the next generation of cheaters aren't just using chess engines like Stockfish, but AIs trained to play like humans. it's becoming increasingly difficult.

the only reason it hasn't completely taken over the platform is that it's expensive. you need a lot of computing power to do this effectively, and most people don't have the resources or the technical ability to make this happen.
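The evasion described above, scheduling posts so their timing looks humanly distributed, is cheap to sketch. The log-normal parameters below are made-up placeholders, not fitted to any real dataset:

```python
import random

def human_like_gaps(n, mu=8.5, sigma=1.2, seed=0):
    """Sample n gaps between posts (in seconds) from a log-normal
    distribution, a heavy-tailed shape that roughly resembles real
    human activity. mu/sigma here are illustrative; an attacker
    would fit them to timing data scraped from real accounts.
    """
    rng = random.Random(seed)
    return [rng.lognormvariate(mu, sigma) for _ in range(n)]

# 1000 posting gaps that a naive regularity check would read as human
gaps = human_like_gaps(1000)
```

An adversarial setup could fit these parameters per account, which is exactly why single-signal detectors fail and platforms layer many signals together.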

[–] ByteOnBikes@slrpnk.net 4 points 1 day ago* (last edited 1 day ago) (2 children)

spend money to generate quality LLM output, they can post as much as they want on virtually all social media sites.

$20 for a ChatGPT Plus account and fractions of pennies to run a bot server. It's really extremely cheap to do this.

I don't have an answer to how to solve the "motivated actor" beyond mass tagging/community effort.

[–] kava@lemmy.world 5 points 1 day ago (1 children)

$20 for a ChatGPT Plus account and fractions of pennies to run a bot server. It's really extremely cheap to do this.

OpenAI has checks for this type of thing. They limit the number of requests per hour on the regular $20 subscription

you'd have to use the API and that comes at a cost per request, depending on which model you are using. it can get expensive very quickly depending on what scale of bot manipulation you are going for
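For scale, a back-of-envelope cost model. The per-token rate and campaign numbers below are hypothetical placeholders, not any provider's actual pricing:

```python
def campaign_cost_usd(posts_per_day, days, tokens_per_post, usd_per_million_tokens):
    """Rough API cost of an automated comment campaign."""
    total_tokens = posts_per_day * days * tokens_per_post
    return total_tokens / 1_000_000 * usd_per_million_tokens

# e.g. 500 posts/day for 90 days at ~800 tokens each,
# at a hypothetical $10 per million tokens:
print(campaign_cost_usd(500, 90, 800, 10))  # 360.0
```

Trivial money for a state actor or marketing firm, but not "fractions of pennies" at real scale, which is the point: cost is still a barrier, just a weak one.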

[–] finder585@lemmy.world 4 points 1 day ago

openAI has checks for this type of thing.

Yep, any operation runs the risk of getting caught by OpenAI.

See this article of it happening:

https://openai.com/index/disrupting-a-covert-iranian-influence-operation/

[–] douglasg14b@lemmy.world 1 points 1 day ago* (last edited 1 day ago)

Heuristics, data analysis, signal processing, ML models...etc

It's about identifying artificial behavior, not artificial text. We can't really identify artificial text, but behavioral patterns are a higher bar for botters to get over.

The community isn't in a position to do anything about it; the platform itself is the only one in a position to gather the necessary data to even start targeting the problem.

You can't target the problem without first collecting and aggregating that data, and Lemmy doesn't do much to enable that currently.

[–] vga@sopuli.xyz 1 points 1 day ago* (last edited 1 day ago)

But something like Reddit at least potentially has the resources to throw some money at the problem. They can employ advanced firewalls and other anti-bot/anti-AI thingies. It's very possible that they're pioneering some state-of-the-art stuff in that area.

Lemmy is a few commies and their pals. Unless China is bankrolling them, they're out of their league.

[–] Willy@sh.itjust.works 18 points 2 days ago* (last edited 2 days ago) (1 children)

Ur a bot. I can tell by the ~~pixels~~ unicode.

Edit: joking aside you bring up a good point and our security through ~~anonymity~~ cultural irrelevance will not last forever. Or maybe it will.

[–] douglasg14b@lemmy.world 4 points 1 day ago* (last edited 1 day ago)

Unfortunately it won't, assuming Lemmy grows.

Lemmy doesn't get targeted by bots because it's obscure: you don't reach much of an audience, and you don't change many opinions.

It has, conservatively, ~0.005% (yes, 0.005%, not a typo) of Reddit's monthly active users.

To put that into perspective, theoretically, $1 spent on Reddit reaches roughly 20,000x the audience it would on Lemmy.

All that needs to happen is that number to become more favorable.
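As a sanity check, the implied reach multiple falls straight out of that 0.005% figure (the percentage itself is the commenter's estimate, not a measured number):

```python
lemmy_share = 0.005 / 100          # ~0.005% of monthly active users
reach_multiple = 1 / lemmy_share   # audience reached per dollar, all else being equal
print(f"{reach_multiple:,.0f}x")   # 20,000x
```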