Yeah, that's exactly why I'm asking this question. All the effort seems to be going into the DB -- but you can have a horribly shitty DB and backend and still have a massively performant webserver just by caching reads to RAM.
I didn't see any tickets about this on GitHub, which is why I'm asking around to see if there's actually some very low-hanging fruit for improving all the instances with a frontend RAM cache.
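To be concrete about what I mean by a frontend RAM cache, here's a rough sketch -- purely illustrative, not something anyone is running; in practice you'd reach for nginx's proxy cache or Varnish rather than a Python proxy, and the upstream address, port, and TTL below are just assumptions:

```python
# Sketch of a read-through RAM cache sitting in front of a Lemmy backend.
# Hypothetical: real deployments would use nginx proxy_cache or Varnish;
# this only illustrates serving repeated reads from memory instead of the DB.
import time

import requests
from flask import Flask, Response, request

UPSTREAM = "http://127.0.0.1:8536"  # assumed lemmy_server address
TTL_SECONDS = 5                     # how long a cached read stays fresh

app = Flask(__name__)
# key -> (expiry timestamp, status code, body bytes, content type)
cache: dict[str, tuple[float, int, bytes, str]] = {}


@app.route("/api/v3/<path:subpath>", methods=["GET"])
def cached_read(subpath: str):
    # Cache key includes the query string so different pages/sorts don't collide.
    key = f"{subpath}?{request.query_string.decode()}"
    hit = cache.get(key)
    if hit and hit[0] > time.time():
        _, status, body, ctype = hit
        return Response(body, status=status, content_type=ctype)

    # Miss or stale: fetch from the backend and remember the answer.
    upstream = requests.get(
        f"{UPSTREAM}/api/v3/{subpath}",
        params=request.query_string.decode(),
        timeout=10,
    )
    ctype = upstream.headers.get("Content-Type", "application/json")
    cache[key] = (time.time() + TTL_SECONDS, upstream.status_code, upstream.content, ctype)
    return Response(upstream.content, status=upstream.status_code, content_type=ctype)


if __name__ == "__main__":
    app.run(port=8080)
```

Only GET reads are cached here; writes and authenticated requests would have to bypass the cache and go straight to the backend.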
Much of your post seemed to focus on the techniques employed by lemmy.world, and caching websocket responses in the web-proxy does not seem to feature prominently among those techniques. If you're interested in advancing the state of the discussion around web-proxy caching, I'd consider standing up an instance to experiment with it and report your own findings. You wouldn't necessarily have to take on the ongoing expense and moderation headache of a public instance: you could run it with new user registrations closed, create your own test users, and write a small load generator powered by https://join-lemmy.org/api/ to investigate the effect of caching common API queries.