Which of the following would you most prefer? A: A puppy, B: A pretty flower from your sweetie, or C: A large properly formatted data file?
The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over, but it can't. Not without your help. But you're not helping.
Is the puppy mechanical in any way?
Being a decentralized, federated network and all, I guess any solution involving anti-bot bots can only be implemented on particular servers in the fediverse. Which means there can also be bot-infected servers (or even zombie servers, meaning servers that are entirely bots) that will federate, or try to federate, with the rest of the fediverse. It will then be the duty of admins to identify the bots with the anti-bot bots and to decide on defederating the infected servers. I also don't know how effective captchas are against AI these days, so I won't comment on that.
We went through this with e-mail. There were mail servers that became notorious as spam hubs, and they were universally banned. More and more sophisticated tools for moderating spam/phishing/scam providers and squashing bad actors are still being developed today. It's an ongoing arms race; I don't think it will be any different or any harder with the fediverse.
Tbh, I'm less concerned with bots and more concerned with actual humans being dicks. Lemmy is still super new, relatively low traffic and kind of a pain to get involved with, but as it grows the number of bad actors will grow with it, and I don't know that the mod tools are up to the job of handling it - the amount of work that mods on The Other Site had to put in to keep communities from being overrun by people trolling and generally being nasty was huge.
How'd Mastodon cope with their big surge in popularity?
The normies all went back to Twitter.
This is why, unlike many others here, I hope Reddit has a long and successful existence. Let it be the flytrap.
Greetings, fellow humans. Do you enjoy building and living in structures, farming, cooking, transportation, and participating in leisure activities such as sports and entertainment as much as I do?
All the things you are concerned about are inevitable; it's how we engage with them that makes the difference.
We're already seeing waves of bot created accounts being banned by admins. Mods are nuking badly behaved users. What is being caught is probably a drop in the bucket compared to what IS happening. It can be better with more mods and more tools.
Oh absolutely. Some of the absolute worst things that plague social media platforms, i.e. spam bots, troll farms, and influence campaigns, haven't bothered to target Lemmy because no one was here.
But an influx of users means an increase in targets. In the same way we're settling in and learning the platform, so are they. It's gonna start ramping up real soon once they determine the optimal strategy. And the most worrying thing is, because of the way the fediverse works, it is going to complicate combating them substantially.
That is maybe the biggest benefit of a centralized platform, and it's a trade-off we're going to have to learn to accept and deal with.
Those issues are coming, and we will have to develop tools to fight against them.
One such tool would be our own AI protecting us: it could learn from content banned by admins, and that information could be shared between instances. It should also be in an active-learning loop, so that it is constantly retrained.
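Something like this rough sketch is what I have in mind: a classifier trained on content admins have already removed, where every human decision gets fed back in for retraining. All of the names, fields, thresholds, and toy data below are assumptions for illustration, not anything Lemmy actually does today.

```python
# Hypothetical sketch: a text classifier trained on content admins removed
# (label 1) vs. content that was left up (label 0). The idea of sharing the
# labeled modlog data between instances is an assumption, not an existing API.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy examples standing in for entries pulled from instance modlogs.
texts = [
    "Buy cheap followers now, limited offer!",   # removed by admins
    "Click this link to claim your free prize",  # removed by admins
    "Has anyone tried the new Lemmy web UI?",    # kept
    "Great write-up, thanks for sharing.",       # kept
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

def flag_for_review(comment: str, threshold: float = 0.5) -> bool:
    """Flag a comment for human review rather than auto-removing it."""
    spam_probability = model.predict_proba([comment])[0][1]
    return spam_probability >= threshold

# Active-learning loop: whatever the human moderator decides gets appended
# to texts/labels (and shared between instances), and the model is refit.
print(flag_for_review("Claim your free prize at this link!"))
```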
Sounds like the start of a cheap sci-fi movie.
Positively marking accounts that are interacting with known humans can also be useful, as would reporting by us.
We can call the AI "Blackwall"
The Intelligence War of 2025 .... fought between Human Intelligence and Artificial Intelligence
Interesting questions.
Spam-bot attacks are already happening, which means the fediverse is already recognized as a valid alternative to the big corporations, though I don't believe the fediverse is seen as a "threat" by them, not yet at least.
I don't agree that Reddit is nuked, just like Twitter isn't; they're taking a blow for sure, but they'll live regardless.
People seeking honest interactions and quality discussions are a minority; the vast majority is content with shitposting and memes, and many don't even know what's happening or don't care. Look at how little it took for the protest to wane: some subs are still protesting or migrating, but the majority reopened and are carrying on like nothing happened.
Admins can protect us from bot armies, and they're doing a good job already; it's up to us to help them by reporting bots when we see them.
Do you think Reddit/big companies will make attacks on the fediverse?
I don't think so; it would be a waste of resources. They don't see the fediverse as a threat. It's true we're growing, but we're still hundreds of thousands against hundreds of millions, a different order of magnitude.
Do you think clickbait posts will start popping up in pursuit of ad revenue?
Clickbait will indeed start, if it hasn't already, but from users, not corporations, along with drama-stirring posts for views (that's already happening). It can be contained by enforcing rules and having enough mods to deal with it, IMO.
I just assume I am the only actual Human on the Internet, and the rest of you are all bots.
Why stop at the internet, how can you be sure brick and mortar humans are sentient?
I think we are going to have to develop moderator bots in an ever escalating war. I am not kidding.
I am a human with soft human skin. Do not run from bots. They are our friends.
Let's hope AI becomes even more advanced and smart enough to have its own morals and join our fight, lol
Yes, exactly!
The way we filter spambots should actually be the same way we filter spam humans -- Downvoting bad posts/comments of any type, and then banning those accounts if it happens regularly.
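A bare-bones sketch of that policy could look something like this (the thresholds and the Account shape are completely made up for illustration):

```python
# Hypothetical "downvote, then ban repeat offenders" heuristic.
from dataclasses import dataclass

HIDE_SCORE = -5          # treat a post this heavily downvoted as community-rejected
STRIKES_BEFORE_BAN = 3   # ban an account after this many rejected posts

@dataclass
class Account:
    name: str
    strikes: int = 0
    banned: bool = False

def record_post_score(account: Account, post_score: int) -> None:
    """Count a strike for badly downvoted posts; ban repeat offenders."""
    if post_score <= HIDE_SCORE:
        account.strikes += 1
    if account.strikes >= STRIKES_BEFORE_BAN:
        account.banned = True

spammer = Account("suspicious_user")
for score in (-8, -12, -7):      # three badly received posts
    record_post_score(spammer, score)
print(spammer.banned)            # True
```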
LMAO this is gold
Unpopular opinion, but karma helped control that kind of stuff, karma minimums and such.
That also created karma-whoring bots, so IDK.
Do you think clickbait posts will start popping up in pursuit of ad revenue?
Now that you mention it... yes.
There are honestly a bunch of structural vulnerabilities here, IMO. Brigading from bot-controlled alt accounts (e.g., "unidaning") is going to be very difficult to detect and stop, for starters.
I think the Fediverse will be able to combat (harmful) bots much more effectively. People are not running this place to sell stock to investors, nor to sell data to advertisers, so we're in better hands for now. I don't know exactly what the future will bring us, but it'll be better as long as the Fediverse doesn't go for-profit.
Could we also use AI to our benefit? We could try coding an AI mod helper that tries to detect and flag which posts are irrelevant/aggressive/etc. It could take the data from every instance's modlog and start learning what probably needs to be banned, and then you could have a human confirming the decision every time. We could even have a system like Steam's anticheat, where a few users have to validate reports.
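To sketch just the human-in-the-loop part (the names and the confirmation threshold are invented, and this ignores how reviewers would actually be chosen): the model only queues a post, and nothing gets removed until enough trusted users agree, a bit like the report-validation idea.

```python
# Hypothetical review queue: the classifier flags, humans confirm removal.
from collections import defaultdict

REQUIRED_CONFIRMATIONS = 3

# post_id -> the set of trusted reviewers who confirmed the flag
flag_queue: dict[str, set[str]] = defaultdict(set)

def model_flags(post_id: str) -> None:
    """The classifier merely queues a suspect post; nothing is removed yet."""
    flag_queue[post_id]  # touching the key creates an empty confirmation set

def confirm(post_id: str, reviewer: str) -> bool:
    """A trusted user confirms the flag; returns True once removal is warranted."""
    flag_queue[post_id].add(reviewer)
    return len(flag_queue[post_id]) >= REQUIRED_CONFIRMATIONS

model_flags("post-123")
for reviewer in ("alice", "bob", "carol"):
    removed = confirm("post-123", reviewer)
print(removed)  # True only after the third distinct confirmation
```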
Calm before the storm, sure. Most migration away from reddit (whether the migration ultimately proves to be consequential or not) will logically happen when the measures that made users migrate actually go into effect.
Either that or the community's reaction to the 3rd party app thing was overblown. In the specific circumstances I don't think it was.
That's a more realistic clear and present danger to the platform IMO - an influx of actual users that makes the numbers to date pale in comparison.
The way the respective platforms handle bots is subtly different, but in a way that could result in profound changes either good or bad. But we haven't actually seen that yet, and the software is still a work in progress. The existing migration has really lit a fire under the devs on issues that were identified years ago where progress has been slow, so for now I'm happy to let that play out and happy with what we've already got. I'm sure if bots become a bigger problem then that's what devs will shift focus toward.
I promise, as an AI experimenter and bot coder, to keep them out of the general population if people don't want them there.
It's OK, but the memes and Reddit can stay away from here.
Well, as we all know, AI has gotten so smart that captchas are useless, and it can engage in social forums disguised as a human.
But for the time being, it is also too expensive to turn into a full-on spam bot.
With Reddit turning into propaganda central and a greedy CEO who has a motive to sell Reddit data to AI farms, I worry that AI will be able to be prompted to target websites such as those in the fediverse.
Nah, they're not going to. They have nothing to gain from that, and the Fediverse/Lemmy isn't so much as a blip on their radar. Optimistically, so far, we might have ~1% of their active base, maybe even less. The whole protest, which basically took over the entire site, only caused a 6.6% reduction in traffic, and only a small proportion of that would go to, and stay on, other Fediverse sites like Lemmy or Kbin.
Reddit likes to think that it can sell the data to the AI sites, but I question how many of them are actually buying it. Similarly for the API. Most of them are likely just going to take from Reddit archives that already exist (since it's neatly packaged up for them), or just scrape the site directly. The API limits are a bit too unwieldy/cumbersome for them, and having to accommodate the API would be a change in workflow.
Reddit data is also a bit junky anyway. How much of it is useless in-jokes, or people summoning other bots? That's extra data that won't help if you're trying to train a language model on Reddit, and it's more likely to hurt than not.
How will the fediverse protect itself from these hypothetical bot armies?
Realistically, it can't, not with the current array of tools available. The current mod tools are too limited to deal with things like a spambot attack, never mind things like the possibility of spam instances being spun up to flood the network.
Defederation doesn't mean much if the spammers can just spin up a new instance and continue barely hindered, and it seems to be the only tool that instances have to deal with things like that.
Do you think Reddit/big companies will make attacks on the fediverse?
No. They're more likely to use it (it costs less), or just ignore it entirely. The number of users that moved over during the Reddit protests, while enormous for Lemmy itself, is barely a blip on Reddit's radar. The actual protests during the most active part of the lockdown accounted for a 6.6% drop in Reddit traffic, which is minuscule, and a much smaller proportion of those people will have joined Fediverse alternatives to their subs.
Do you think clickbait posts will start popping up in pursuit of ad revenue?
Ad Revenue isn't the only reason for clickbait to exist, but yes, I don't doubt that we'll see it happen sooner or later. It'll mostly be the Reddit kind, where it'll try to farm votes, since click-throughs don't matter on Lemmy or Kbin, and it's things like votes and boosts that will.
What are your thoughts and insights on this new "internet 2.0"?
I like the integration with places like Mastodon, and it would be nice to extend that out all the way, but at the same time, it is clearly still in heavy development. If we got hit with a spam wave right now, we would struggle to do anything about it, and the same goes for spam instances and all of that.
The Lemmy interface is a bit glitchy, and there's not much in the way of apps, which were one of the draws for people to get on social media on their phones in the first place.
There are a few features that would be nice to have but aren't implemented yet, like being able to move an account to another instance without having to recreate it, or just having one single account for all instances, like how Hubzilla has its "nomadic identity".
Are you AI?
Jk. Unless?
Anyway, for stuff like this you always have to ask questions: why? How? Are they trying to form public opinion? If so, is Lemmy really the place to do it? Do they truly have the resources to overrun all our instances? That's one of the reasons it's important we don't all cluster into one place. It's easy for Facebook to form opinions on Facebook because they have access to everybody's eyeballs and they own all the servers. Having to do it to a thousand different servers they have no control over is a whole different story.
I think the key here is going to be coming up with robust protocols for user verification; you can't run an army of spambots if you can't create thousands of accounts.
Doing this well will probably be beyond the capacity of most instance maintainers, so you'd likely end up with a small number of companies that most instances agreed to accept verifications from. The fact that it would be a competitive market - and that a company that failed to do this well would be liable to have its verifications no longer accepted - would simultaneously incentivize them to both a) do a good job and b) offer a variety of verification methods, so that if, say, you wanted to remain anonymous even to them, one company might allow you to verify a new account off of a combination of other long-lived social media accounts rather than by asking for a driver's license or whatever.
And of course there's no reason you couldn't also have 2 or 3 different verifications on your account if you needed that many to have your posts accepted on most instances; yes, it's a little messy, but messy also means resilient.
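As a loose sketch of how an instance could consume those verifications (the provider names, the Attestation shape, and the two-attestation minimum are all invented for illustration, not an existing protocol):

```python
# Hypothetical per-instance policy: accept an account only if enough distinct
# trusted verification providers vouch for it.
from dataclasses import dataclass

TRUSTED_PROVIDERS = {"verifyco.example", "idproof.example"}  # set by each instance
MIN_ATTESTATIONS = 2

@dataclass(frozen=True)
class Attestation:
    provider: str   # which verification company vouches for the account
    account: str    # the fediverse account being vouched for

def is_verified(account: str, attestations: list[Attestation]) -> bool:
    """Accept the account if enough distinct trusted providers vouch for it."""
    vouching = {
        a.provider
        for a in attestations
        if a.account == account and a.provider in TRUSTED_PROVIDERS
    }
    return len(vouching) >= MIN_ATTESTATIONS

proofs = [
    Attestation("verifyco.example", "@someone@lemmy.world"),
    Attestation("idproof.example", "@someone@lemmy.world"),
]
print(is_verified("@someone@lemmy.world", proofs))  # True: two distinct providers
```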
AI has gotten so smart that captchas are useless, and it can engage in social forums disguised as a human.
Yes, we have to watch out for these dang ol' bots astroturfing Lemmy disguised as real human beings. Who's with me?
I am. Prepare to be annihilated, bot.
Hello, fellow HUMAN.
I agree that wondrously sentient AI ROBOTS are not good for those of us with circulatory systems.
/s
Honestly we need to work on getting the community to manage bots.