this post was submitted on 12 Jun 2023
387 points (99.2% liked)

At the time of writing, Lemmy.world has the second-highest number of active users of any Lemmy instance.

Also at the time of writing, Lemmy.world has >99% uptime.

By comparison, other Lemmy instances with as many users as Lemmy.world keep going down.

What optimizations have the Lemmy.world admins made to their hosting configuration that make it more resilient than other instances' configurations?

See also Does Lemmy cache the frontpage by default (read-only)? on !lemmy_support@lemmy.ml

[–] PriorProject@lemmy.world 167 points 1 year ago (13 children)

I'm not an admin, but have followed the sizing discussions around the lemmyverse as closely as I can from my position of lacking first-hand knowledge:

  • lemmy.ml is the biggest instance by user count, but runs on incredibly modest 8-cpu hardware. Their cloud provider doesn't provide any easy scale up options for them, so they can't trivially restart on a bigger VM with their db and disk in place. I suspect this means that instance is going to suffer for a bit as they figure out what to do next.
  • lemmy.world on the other hand was running on a box at least twice as big as lemmy.ml at last count, and I believe they can go quite a bit bigger if they need to.
  • The lemmy.world admins also run mastodon.world and lived through the twitterpocalypse, seeing peak user registration rates of 4k per hour. So this is not their first rodeo in terms of explosive growth, and I'm sure that experience gives them some tricks up their sleeve.
  • The admin team is pretty clearly technically strong. If I recall correctly, ruud is a professional database admin. One of the spooky parts of Lemmy performance-wise is the db. If ruud or others on the admin team custom-tuned their pg setup based on their own analysis of how/why it's slow, they may be getting more performance per CPU cycle than instances running more stock configs, or instances cargo-culting tweaks that aren't optimal for their setup.

I'm surprised that sh.itjust.works isn't growing faster. They also have a hefty hardware setup and seemingly the technical admins to handle big user counts. I wonder if it's a branding problem, where lemmy.world sounds inviting and plausibly serious where sh.itjust.works sounds like clowntown even though it's run by a capable and serious team.

[–] Pspspspspsps@lemmy.world 128 points 1 year ago* (last edited 1 year ago) (5 children)

I wonder if it's a branding problem, where lemmy.world sounds inviting and plausibly serious where sh.itjust.works sounds like clowntown

That was my thought process when choosing an instance tbh. I'm not a tech person, I looked at the list and lemmy.world was the first 'safest feeling' instance that had open sign up. I saw sh.itjust.works and didn't even check their sign up process, there were too many periods in the strange name and it just looks weird to me as someone not used to these things. Edit: spelling

[–] Z______@lemmy.world 32 points 1 year ago* (last edited 1 year ago) (1 children)

I definitely second the motion on it being a branding problem. Stuff like sh.itjust.works seems to me like something that dark basement tech nerds would come up with that is "edgy" and really only used by them and other people like them.

I'm not really into the ironic "edgy" aesthetic and part of the struggle with this transition for me has been orienting myself in the space because I don't want to commit to some "sketchy" edgelord URL

[–] darkwing_duck@sh.itjust.works 29 points 1 year ago (2 children)

something that dark basement tech nerds would come up with that is β€œedgy” and really only used by them and other people like them.

That's exactly what it is and why I love it. The whole thing about this federated networking is that it doesn't matter where you signed up.

[–] ericjmorey@lemmy.world 9 points 1 year ago (2 children)

Where you sign up entirely determines your local feed.

Just like with reddit, I don't use defaults.

[–] Mjb@feddit.uk 10 points 1 year ago (1 children)

The least useful of the three feeds

[–] ericjmorey@lemmy.world 1 points 1 year ago

Depending on where you sign up, it could be the most useful.

[–] ericjmorey@lemmy.world 3 points 1 year ago

Where you sign up greatly affects moderation and administration issues like updating the software the instance runs on to benefit from security fixes, optimizations, and enhancements.

[–] Guy_Fieris_Hair@lemmy.world 14 points 1 year ago* (last edited 1 year ago)

I do think join-lemmy.org could possibly be changed to show server usage/capacity and uptime. When I initially signed up I went for lemmy.ml because what the heck is the difference? Honestly I was having all kinds of timeouts and thought the entire lemmy-verse was probably struggling. I was concerned that was the experience everyone was getting that they were going to leave because it is unsustainable.

But I ended up seeing a page showing the uptime of servers, and lemmy.world was 100% (at that point). So I figured I'd start an account here. HOLY CRAP IT IS SO MUCH FASTER. I would have had a hard time sticking around if it all worked like lemmy.ml.

I started a community on lemmy.ml. Wish I would have done it here.

[–] s4if@lemmy.world 7 points 1 year ago

Nah, I'm a bit regretting not signing up on their instance. sh.itjust.works is a cool name and can be a brag point, lol. lemmy.world is a bit too generalist, but I won't migrate there as ruud (the admin of lemmy.world) is doing a good job managing the instance. I appreciate that. :)

I joined sh.itjust.works because of the name, but it seems to run pretty well.

[–] furrowsofar@beehaw.org 1 points 1 year ago

For what it is worth, I looked at sh.itjust.works . Reason I choose beehaw.org was they were more local, and had more local content and users. Plus the server focus and values seemed to fit me better. Yes their domain is a bit odd but that was not a factor for me.

[–] RetroEvolute@lemmy.world 48 points 1 year ago* (last edited 1 year ago) (1 children)

I originally signed up with sh.itjust.works, but I wanted to be on the instance with the majority of migrants.

Also, it sounds dumb, but I think the sh.itjust.works domain is just kinda weird, technically has a "curse word" in it (not that I personally care), and they don't support NSFW content (which isn't just used for porn). So, it didn't make sense to have that as my home instance. πŸ€·β€β™‚οΈ

Edit: Also, this is my first comment on here! Hello world! πŸ‘‹

[–] PriorProject@lemmy.world 13 points 1 year ago (1 children)

Yeah, I get it. Naming optics aside, it seems an instance with a lot of headroom relative to others, with a capable team. Would be near the top of my word-of-mouth options in spite of the idiosyncratic name.

It's been running a little slow today though so maybe not as much headroom as you think

[–] ClassyHatter@fedia.io 39 points 1 year ago

Lemmy.world was just migrated to a dedicated server: https://lemmy.world/post/75556

[–] Master@lemmy.world 30 points 1 year ago (3 children)

Can confirm... I didn't sign up for sh.itjust.works solely because of the name... I don't particularly want that attached to every post I make.

[–] lift@aussie.zone 6 points 1 year ago

Agreed. I have no idea what I’m doing, but lemmy.world sounded inviting - thus, I’m here.

Guess we're just different kinds of people...

Hah, you should have seen the wolfballs instance

[–] wheen@lemmy.world 14 points 1 year ago (1 children)

Can none of this scale horizontally? Every mention of scaling has been just "throw a bigger computer at it".

We're already running into issues with the bigger servers being unable to handle the load. Spinning up entirely new instances technically works, but is an awful user experience and seems like it could be exploited.

[–] PriorProject@lemmy.world 38 points 1 year ago* (last edited 1 year ago)

It's important to recall that last week the biggest lemmy server in the world ran on a 4-core VM. Anybody that says you can scale from this to reddit overnight with "horizontal scaling" is selling some snake oil. Scaling is hard work and there aren't really any shortcuts. Lemmy is doing pretty well on the curve of how systems tend to handle major waves of adoption.

But that's not your question, you asked if Lemmy can horizontally scale. The answer is yes, but in a limited/finite way. The production docker-compose file that many lemmy installs are based on has 5 components. From the inside out, they are:

  • Postgres: The database, stores most of the data for the other components. Exposes a protocol to accept and return SQL queries and responses.
  • Lemmy: The application server, exposes websockets and http protocols for lemmy clients... also talks to the db.
  • Lemmy-ui: Talks to Lemmy over websockets (for now, they're working to deprecate that soon) and does some fancy dynamic webpage construction.
  • Nginx: Acts as a web proxy. Does https encryption, compression over the wire, could potentially do some static asset caching of images but I didn't see that configured in my skim of the config.
  • Pict-rs: Some kind of image-hosting server.

So... first off... there are 5 layers there that talk to each other over the docker network. So you can definitely use 5 computers to run a lemmy instance. That's a non-zero amount of horizontal scaling. Of those layers, I'm told that lemmy and lemmy-ui are stateless and you can run an arbitrary number of them today. There are ways of scaling nginx using round-robin DNS and other load-balancing mechanisms. So 3 out of the 5 layers scale horizontally.
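For a concrete picture, a minimal sketch of that five-service layout might look like the following (image names, versions, and volume paths are illustrative assumptions, not the actual production compose file):

```yaml
# Illustrative five-service Lemmy stack; names/tags are assumptions.
version: "3.7"

services:
  postgres:             # the database; stores most state
    image: postgres:15-alpine
    environment:
      - POSTGRES_PASSWORD=changeme
    volumes:
      - ./volumes/postgres:/var/lib/postgresql/data

  lemmy:                # application server; stateless, can be replicated
    image: dessalines/lemmy:latest
    depends_on: [postgres, pictrs]

  lemmy-ui:             # front-end; also stateless
    image: dessalines/lemmy-ui:latest
    depends_on: [lemmy]

  pictrs:               # image hosting; single instance for now
    image: asonix/pictrs:latest
    volumes:
      - ./volumes/pictrs:/mnt

  nginx:                # TLS termination, compression, reverse proxy
    image: nginx:alpine
    ports: ["443:443"]
    depends_on: [lemmy, lemmy-ui]
```

Each service here could in principle land on its own machine, which is the limited horizontal scaling described above.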

Pict-rs does not. It can be backed by object storage like S3, and there are lots of object storage systems that scale horizontally. But pict-rs itself seems to still need to be a single instance. But still, that's just one part of lemmy and you can throw it on a giant multicore box backed by scalable object storage. Should take you pretty far.

Which leaves postgres. Right now I believe everyone is running a single postgres instance and scaling it bigger, which is common. But postgres has ways to scale across boxes as well. It supports "read-replicas", where the primary postgres copies data to the replicas and they serve reads, so the primary can focus on handling just the writes. Lemmy doesn't support this kind of advanced request routing today, but Postgres is ready when it can. In the far future, there's also sharding writes across multiple primaries, which is complex and has its downsides but can scale writes quite a lot.
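For reference, the read-replica setup described above boils down to a few settings on a modern Postgres (hostnames and values here are illustrative; remember Lemmy itself can't route reads to replicas yet):

```
# postgresql.conf on the primary (illustrative values)
wal_level = replica          # ship WAL to standbys
max_wal_senders = 4          # one slot per replica, plus headroom

# on each replica: create an empty standby.signal file, then set
primary_conninfo = 'host=db-primary user=replicator password=changeme'
hot_standby = on             # allow read-only queries on the standby
```

With that in place, a connection pooler or the application would still have to send SELECTs to the standbys and writes to the primary, which is the routing piece Lemmy lacks today.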

All of which is to say... lemmy isn't built on purely distributed primitives that can each scale horizontally to arbitrary numbers of machines. But there is quite a lot of opportunity to scale out in the current architecture. Why don't people do it more? Because buying a bigger box is 10x-100x easier until it stops being possible, and we haven't hit that point yet.

[–] Druidgrove@lemmy.world 14 points 1 year ago (2 children)

I'm now going to start incorporating "Sounds like clowntown" into my everyday conversations - that's funny!

[–] XTL@sopuli.xyz 4 points 1 year ago (1 children)

Mind you, it can sound a lot like clown world which is a phrase Nazis and other groups against progress love to use.

[–] sweetholymosiah@lemmy.ca 1 points 1 year ago* (last edited 1 year ago)

"clown world" was at least initially a reference to how the CIA meddles in the affairs of the world (Clowns In America).

[–] PriorProject@lemmy.world 3 points 1 year ago

Quit clowning around.

[–] tigerdactyl@sh.itjust.works 13 points 1 year ago (1 children)

I’ve been having issues registering for lemmy.world so I went with sh.itjust.works and it’s been great so far

[–] RisingSwell@lemmy.world 6 points 1 year ago (1 children)

I had issues trying for lemmy.world yesterday, but it worked fine today. I just waited a day because I figured between upgrades and a massive influx of new users it was probably gonna be a bit unstable sometimes.

Yeah I’m being patient, I can’t imagine the stress the hardware and humans behind all this are under. I did get my lemmy.world account registered eventually. Not sure what server to call home yet!

[–] darkwing_duck@sh.itjust.works 12 points 1 year ago* (last edited 1 year ago)

That's actually awesome for users of sh.itjust.works. Like myself.

[–] WaffleFriends@lemmy.world 12 points 1 year ago

I had a very similar thought process when choosing my instance. lemmy.world seemed like it would be more open to new users than an instance named sh.itjust.works. Idk why that was my thought process but I’m here now

[–] StrayPizza@lemmy.world 10 points 1 year ago

I hope lemmy.ml can upgrade at some point. A lot of the slowness I'm running into is trying to browse/discovery communities that happen to live on that instance.

[–] maltfield@monero.house 7 points 1 year ago (1 children)

Right, but if you don't have a cache setup, then the DB gets taxed. At a certain point a cache loses its benefit, but an enormous amount of savings can be made (to backend DB calls, for example) by just caching all API reads for ~60 seconds.

[–] andrew@radiation.party 8 points 1 year ago* (last edited 1 year ago) (1 children)

Ensuring there's no data leakage in those cached calls can be tricky, especially if any api calls return anything sensitive (login tokens, authentication information, etc) but I can see caching all read-only endpoints that return the same data regardless of permissions for a second or two being helpful for the larger servers.

It's also worth noting that postgres does its own query-level caching, quite aggressively too. I've worked in some places where we had to add a SELECT RANDOM() to a query to ensure it was pulling the latest data.

[–] maltfield@monero.house 4 points 1 year ago (1 children)

In my experience, the best benefits gained from caching are done before the backend and are stored in RAM, so the query never even reaches those services at all. I've used varnish for this (which is also what the big CDN providers use). In Lemmy, I imagine that would be the nginx proxy that sits in front of the backend.
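As a sketch of that micro-caching idea in nginx (the directives are real nginx ones, but the paths, hostnames, and timings are illustrative, and this only applies to plain HTTP GET endpoints, not websocket traffic):

```nginx
# Illustrative 60-second micro-cache for read-only API endpoints.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=api_cache:10m
                 max_size=1g inactive=5m;

server {
    listen 443 ssl;
    server_name example-instance.local;   # hypothetical hostname

    location /api/v3/post/list {          # assumed read-only endpoint
        proxy_pass http://lemmy:8536;     # backend name/port assumed
        proxy_cache api_cache;
        proxy_cache_valid 200 60s;        # serve cached reads for ~60s
        proxy_cache_use_stale updating;   # old copy while revalidating
        proxy_cache_lock on;              # collapse concurrent misses
        # never cache or serve cached copies to authenticated requests
        proxy_cache_bypass $http_authorization;
        proxy_no_cache $http_authorization;
    }
}
```

The `proxy_cache_bypass`/`proxy_no_cache` pair is the important safety valve: anything carrying credentials skips the cache entirely.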

[–] PriorProject@lemmy.world 3 points 1 year ago (3 children)

I haven't heard admins discussing web-proxy caching, which may have something to do with the fact that the Lemmy API is currently pretty much entirely over websockets. I'm not an expert in websockets, and I don't want to say that websocket API responses absolutely can't be cached... but it's not like caching a restful API. They are working on moving away from websockets, btw... but it's not there yet.

The comments from Lemmy devs in https://github.com/LemmyNet/lemmy/issues/2877 make me think that there's a lot of database query optimization low-hanging fruit to be had, and that admins are frequently focusing on app configs like worker counts and db configs to maximize the effectiveness of db-level caches, indexes, and other optimizations.

Which isn't to say there aren't gains in the direction you're suggesting, but I haven't seen evidence that anyone's secret sauce is in effective web-proxy caches.

[–] maltfield@monero.house 5 points 1 year ago (1 children)

Yeah, that's exactly why I'm asking this question. All the effort seems to be going into the DB -- but you can have a horribly shitty DB and backend but still have a massively performant webserver by just caching away the reads to RAM.

I didn't see any tickets about this on the GitHub, which is why I'm asking around to see if there's actually some very low-hanging-fruit for improving all the instances with a frontend RAM cache.

[–] PriorProject@lemmy.world 5 points 1 year ago

Yeah, that's exactly why I'm asking this question. All the effort seems to be going into the DB -- but you can have a horribly shitty DB and backend but still have a massively performant webserver by just caching away the reads to RAM.

Much of your post seemed to focus on the techniques employed by lemmy.world, caching websocket responses in the web-proxy does not seem to prominently feature among those techniques.

If you're interested in advancing the state of the discussion around web-proxy caching, I'd consider standing up an instance to experiment with it and report your own findings. You wouldn't necessarily have to take on the ongoing expense and moderation headache of a public instance, you could set up with new user registrations closed, create your own test users, and write a small load generator powered by https://join-lemmy.org/api/ to investigate the effect of caching common API queries.
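A tiny load generator in that spirit might look like this (the endpoint paths and instance URL are assumptions, not a documented test plan; the fetch function is injectable so the summary logic can be exercised without hammering a live server):

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

# Hypothetical read-only endpoints; real Lemmy API paths may differ.
ENDPOINTS = ["/api/v3/post/list?sort=Hot", "/api/v3/community/list"]

def timed_get(url):
    """Fetch a URL and return elapsed wall-clock seconds."""
    start = time.monotonic()
    with urlopen(url) as resp:
        resp.read()
    return time.monotonic() - start

def run_load(base_url, requests_per_endpoint=20, workers=4, fetch=timed_get):
    """Hit each endpoint repeatedly and summarize latencies.

    `fetch` is injectable so the stats logic can run without a server.
    """
    urls = [base_url + ep
            for ep in ENDPOINTS
            for _ in range(requests_per_endpoint)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        latencies = list(pool.map(fetch, urls))
    return {
        "requests": len(latencies),
        "p50": statistics.median(latencies),
        "max": max(latencies),
    }

# Usage (against a test instance you control):
# print(run_load("https://my-test-instance.example", requests_per_endpoint=50))
```

Comparing the latency summary with the proxy cache on versus off would give a first read on whether caching pays off.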

[–] s900mhz@beehaw.org 3 points 1 year ago (1 children)

I may be wrong, but there is a branch in the works (UI repo) that pulls the web socket out and replaces it all with http calls. So the web socket may not be here for long

[–] PriorProject@lemmy.world 1 points 1 year ago (1 children)

You're correct, the devs are already committed to deprecating the websocket API. This may make caching easier in the future and people may use it more as a result. I'm a little bit skeptical as most of the heavy requests are from authenticated users, and web-proxy caching authenticated requests without risking serving them up to the wrong user is also non-trivial. But caching is not my area of expertise, there may be straightforward solutions here.

But my comment was in reference to current releases in use on real world Lemmy servers.

[–] s900mhz@beehaw.org 1 points 1 year ago (1 children)

Yes, I didn't intend to downplay your comment. Caching at the proxy layer with auth is something I am not familiar with. I never had to implement it in my career. (So far πŸ˜…) I just wanted to make it known that the websocket may be a thing of Lemmy's past for anyone unaware

[–] PriorProject@lemmy.world 2 points 1 year ago (1 children)

Yes, I didn’t intend to downplay your comment.

I never interpreted it that way. Your comment was helpful, and I was expanding on it with more context. Lemmy on, friend.

[–] s900mhz@beehaw.org 1 points 1 year ago

Good to hear! Lemmy on 🐭✊

[–] yourstruly@dataterm.digital 3 points 1 year ago* (last edited 1 year ago)

I work on nginx cache modules for a CDN provider.

While websockets can be proxied, they're impractical to cache. There are no turn key solutions for this that I'm aware of, but an interesting approach might be to build something on top of NChan with some custom logic in ngx_lua.

I agree with you that web proxy caches aren't the silver bullet solution. They need to be part of a more holistic approach, which should start with optimizing the database queries.

Caching with auth is possible, but it's a whole can of worms that should be a last resort, not a first one.

[–] isosphere@beehaw.org 6 points 1 year ago* (last edited 1 year ago)

sh.itjust.works

on paper i'd be on this instance but the name is quite terrible and gives me little confidence in the administration

[–] pomi@feddit.de 5 points 1 year ago* (last edited 1 year ago)

lemmy.ml just migrated to bare metal https://lemmy.ml/post/1234235