this post was submitted on 07 Mar 2024

Follow-up from https://lemmy.dbzer0.com/post/15792108

I've spent a ton of hours trying to troubleshoot why lemmy.dbzer0.com is falling behind lemmy.world in federation. We have a good idea where it's going wrong, but we have no idea why.

The problem is that once I receive an apub sync from l.w and send the request to the lemmy backend, it takes about 1 second to process, which is way too long (it should typically be < 100ms).
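For reference, here's roughly the kind of measurement I mean, as a minimal Rust sketch: time a single POST to the backend's shared inbox. The backend address and payload are placeholders, and a real apub delivery carries HTTP signatures (skipped here), so this illustrates the measurement, not a working delivery.

```rust
// Minimal sketch: time one ActivityPub delivery against the lemmy backend.
// The address and payload are placeholders; real deliveries are
// HTTP-signature-verified, which is omitted here.
use std::time::Instant;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = reqwest::blocking::Client::new();
    let activity = r#"{"@context":"https://www.w3.org/ns/activitystreams","type":"Like"}"#;

    let start = Instant::now();
    let resp = client
        .post("http://LEMMY_BACKEND:8536/inbox") // hypothetical direct backend address
        .header("Content-Type", "application/activity+json")
        .body(activity)
        .send()?;
    let elapsed = start.elapsed();

    // Healthy processing should land well under 100 ms; at ~1 s per
    // activity, sequential federation tops out at ~1 activity/second.
    println!("status: {}, took: {:?}", resp.status(), elapsed);
    Ok(())
}
```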

We had a look at the DB and it was somewhat slow due to syncing commits to disk. OK, we disabled that, and now the DB is much faster (and less safe, but whatever), yet the sync times have not improved at all.
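For context: the usual PostgreSQL knob for syncing commits to disk is `synchronous_commit`. A minimal sketch of turning it off at runtime, assuming that's the setting in question, using the Rust `postgres` crate with superuser access:

```rust
// Minimal sketch, assuming the setting involved is synchronous_commit.
// Turning it off trades durability of the most recent commits for speed.
use postgres::{Client, NoTls};

fn main() -> Result<(), postgres::Error> {
    // Connection string is a placeholder.
    let mut client = Client::connect("host=localhost user=postgres dbname=lemmy", NoTls)?;

    // ALTER SYSTEM writes postgresql.auto.conf and cannot run inside a
    // transaction, so issue each statement on its own.
    client.batch_execute("ALTER SYSTEM SET synchronous_commit = off")?;
    client.batch_execute("SELECT pg_reload_conf()")?;
    Ok(())
}
```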

I've also run a lot of tests to ensure the problem is not coming from my load balancers, and I'm certain I've removed them from the equation. The issue is somewhere within the lemmy docker stuff and/or the postgresql DB.
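One way to rule the load balancers out is to time the same API call through them and straight at the backend container; if both come back equally slow, the balancers are off the hook. A sketch, with placeholder addresses:

```rust
// Sketch: compare the same request via the load balancer and direct to
// the backend container. Addresses are placeholders.
use std::time::{Duration, Instant};

fn time_get(client: &reqwest::blocking::Client, url: &str) -> Result<Duration, reqwest::Error> {
    let start = Instant::now();
    client.get(url).send()?;
    Ok(start.elapsed())
}

fn main() -> Result<(), reqwest::Error> {
    let client = reqwest::blocking::Client::new();
    let via_lb = time_get(&client, "https://lemmy.dbzer0.com/api/v3/site")?;
    let direct = time_get(&client, "http://LEMMY_BACKEND:8536/api/v3/site")?; // hypothetical direct address
    // If both are ~equally slow, the bottleneck is past the balancers.
    println!("via LB: {via_lb:?}, direct: {direct:?}");
    Ok(())
}
```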

Unfortunately I'm relying solely on other admins' help on Matrix, and at this point I'm being asked to recompile lemmy from scratch or deploy my own docker container with more debug instructions. Neither of these is within my skillset, so I'm struggling to make progress on this.

In the meantime we're falling further and further behind in the lemmy.world federation queue (along with a lot of other instances). To clarify, the problem is not lemmy.world: it takes my instance the same time to process apub syncs from every other server too. It's just that the other servers don't have as much traffic, so 1/s is enough to keep up. But lemmy.world has such a constant stream of changes that 1/s is not nearly fast enough.
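To put numbers on that (the incoming rate below is made up; only the ~1/s processing ceiling is observed):

```rust
// Toy backlog model. lemmy.world delivers activities one at a time, so
// ~1 s of processing per activity caps throughput at ~1/s. The incoming
// rate below is invented for illustration.
fn main() {
    let incoming_per_sec = 3.0_f64;  // hypothetical lemmy.world activity rate
    let processed_per_sec = 1.0_f64; // observed ceiling: ~1 s per sync

    let growth_per_day = (incoming_per_sec - processed_per_sec) * 86_400.0;

    // At these rates the backlog grows by ~172,800 activities per day
    // and never drains until processing beats the incoming rate.
    println!("backlog grows by {growth_per_day} activities/day");
}
```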

I'm continuing to dig on this as much as I can. But I won't lie that I could use some help.

I'll keep you all updated in this thread.

db0@lemmy.dbzer0.com 103 points 8 months ago (last edited 8 months ago)

Final Update: Problem has been resolved by migrating my lemmy backend. We are currently catching up to lemmy.world, which will probably take most of the next day, but the "distance" is already shrinking by hundreds of syncs per minute.

I will write a post-mortem soon.

5714@lemmy.dbzer0.com 22 points 8 months ago

HACKERPERSON-Energy. Hope you feel relieved now.

db0@lemmy.dbzer0.com 26 points 8 months ago
SlyLycan@lemmy.dbzer0.com 9 points 8 months ago

Appreciate everything you do! Thanks
