Nothing4You

joined 8 months ago
[–] Nothing4You@programming.dev 43 points 5 months ago* (last edited 5 months ago) (5 children)

it should be noted that these bans are community bans, not instance bans. your title makes it look like people are getting instance banned from lemmy.world, while the examples you've shown are about community bans.

if i'm not mistaken, several/most of the lemmy.ml bans/ban complaints have been about instance bans, which affect all communities on the instance.

[–] Nothing4You@programming.dev 2 points 5 months ago (2 children)

except instance A will actively reject such content from B users when it hears about it from C.

generally you should expect not to see any new content from B, but historic content will still exist and basically be in a frozen state.

[–] Nothing4You@programming.dev 2 points 5 months ago

the main problem is still that reports are not reliably getting to remote moderators: https://github.com/LemmyNet/lemmy/issues/4744

other than that it should be working.

[–] Nothing4You@programming.dev 10 points 5 months ago

It should be noted that the (visibility of) community bans are a result of better enforcement of site bans in 0.19.4, which for now is implemented by sending out community bans for local communities when a user gets instance banned: https://github.com/LemmyNet/lemmy/pull/4464

Prior to this, when a user got instance banned from .ml, they were also implicitly banned from .ml communities, but this was only known to the instance they were banned on. As a result, users were still able to post, comment, and vote on those communities, but it would be visible only on that user's instance, not federated anywhere else. Visibility of this ban was exclusively on the banning instance's modlog.

fyi @SpaceCadet@feddit.nl

[–] Nothing4You@programming.dev 4 points 5 months ago

I've submitted a PR to fix this; it might still make it into 0.19.4.

fyi @DABDA@lemm.ee

[–] Nothing4You@programming.dev 2 points 5 months ago (1 children)

curious, what do you mean by checking them?

[–] Nothing4You@programming.dev 2 points 5 months ago (5 children)

if you open https://lemmy.world/api/v3/user/unread_count after being logged in, it should at least tell you what kind of unread message it is.

with that information it can probably be narrowed down a bit.
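for example, something along these lines (a rough python sketch, assuming a 0.19.x-style api where login returns a jwt and the token goes in an authorization header; the username and password are placeholders):

```python
import requests

INSTANCE = "https://lemmy.world"

# log in to get a jwt (username/password are placeholders)
login = requests.post(
    f"{INSTANCE}/api/v3/user/login",
    json={"username_or_email": "your_username", "password": "your_password"},
)
jwt = login.json()["jwt"]

# fetch the unread counts, split by notification type
resp = requests.get(
    f"{INSTANCE}/api/v3/user/unread_count",
    headers={"Authorization": f"Bearer {jwt}"},
)
print(resp.json())  # e.g. {"replies": 1, "mentions": 0, "private_messages": 0}
```

the response splits the count into replies, mentions and private messages, which already tells you where the phantom notification is coming from.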

i don't think this is related to an inconsistency with blocked users, as that is only being fixed in 0.19.4 or 0.19.5: https://github.com/LemmyNet/lemmy/issues/4227

moderated or deleted comments, as mentioned by others, don't look like they would be the cause from what i can see in the 0.19.3 code.

the bot reply mentioned by @DABDA@lemm.ee seems like a very plausible explanation, as bot accounts are hidden from the comment reply list in the api, but they're not currently excluded from the notification count.

i'll have a look at whether that is still the case in the current development version in a bit and submit a pr to fix that if it is.

[–] Nothing4You@programming.dev 3 points 6 months ago

starting with 0.19.4, at least user settings will default to their browser's accepted languages on registration: https://github.com/LemmyNet/lemmy/pull/4550

this doesn't solve actually tagging content, but it's at least some progress.

[–] Nothing4You@programming.dev 3 points 6 months ago

lemmy's current federation implementation works with a sending queue: it stores a list of activities to be sent in its database. there is a worker running for each linked instance that checks whether an activity should be sent to that instance and, if so, sends it. due to how this is currently implemented, only a single activity is sent at a time, waiting for that activity to be successfully sent (or rejected) before sending the next one.
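roughly, each per-instance worker behaves like this (a simplified python sketch of the behaviour described above, not lemmy's actual rust code; the in-memory queue and the helper functions are made up for illustration):

```python
import time
from dataclasses import dataclass

@dataclass
class Activity:
    id: int
    payload: str

# stand-in for the database-backed queue (in-memory here, just for the sketch)
queue = [Activity(i, f"activity {i}") for i in range(10)]

def should_send(activity: Activity, instance: str) -> bool:
    # lemmy skips e.g. community activities with no subscribers on the target;
    # always True here to keep the sketch short
    return True

def deliver(activity: Activity, instance: str, rtt: float = 0.32) -> None:
    # stands in for the HTTP POST to the remote inbox; blocks for one round trip
    time.sleep(rtt)

def federation_worker(instance: str) -> None:
    # one worker per linked instance, sending strictly one activity at a time
    last_sent_id = -1  # lemmy persists this per instance so the worker can resume
    for activity in queue:
        if activity.id <= last_sent_id or not should_send(activity, instance):
            continue
        deliver(activity, instance)  # wait for success (or rejection) before moving on
        last_sent_id = activity.id

federation_worker("aussie.zone")
```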

an activity is any federation message when an instance informs another instance about something happening. this includes posts, comments, votes, reports, private messages, moderation actions, and a few others.

let's assume an activity is generated on lemmy.world every second. every second, the worker will then send this activity from helsinki to sydney, wait for the response, then wait for the next activity to become available. to simplify things, i'll skip processing time in this example and just work with raw latency, based on the number you provided. lemmy.world sends an activity to sydney, which takes approximately 160ms. aussie.zone immediately responds, and the response takes another 160ms to get back to helsinki. in sum, the entire round trip takes 320ms. as long as only one activity is generated per second, this is easy to keep up with. still assuming there is no other processing time, this means about 3.125 activities per second can be transmitted from lemmy.world to aussie.zone on average.
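that number is simply one activity per round trip:

```python
rtt = 0.160 * 2           # helsinki -> sydney -> helsinki, in seconds
max_per_second = 1 / rtt  # one activity per round trip
print(max_per_second)     # 3.125
```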

the real activity generation rate on lemmy.world is quite a bit higher than 3.125 activities per second, and in reality there are also other things that take up time during this process. over the last 7 days, lemmy.world had an average activity generation rate of about 5.45 activities per second. it is important to note that not all activities generated on an instance will be sent to all other linked instances, so this isn't a reliable number for how many activities actually have to be sent to aussie.zone every second, rather an upper limit. for example, for content in a community, lemmy will only send those activities to instances that have at least one subscriber to that community. private messages, while only a small fraction of all activities, are another example of activities that are only sent to a single linked instance.

to answer the original question: the week of delay is simply built up over time, as the amount of lag just keeps growing.
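to put some rough, hypothetical numbers on that (assuming, for illustration only, that the full generation rate had to reach aussie.zone, which as noted above is just an upper limit):

```python
generated_per_s = 5.45   # average activities generated per second on lemmy.world
max_send_per_s = 3.125   # ceiling from the 320 ms round trip above

# each second ~5.45 activities are queued but only ~3.125 can be delivered,
# so the point in the queue being delivered falls further behind real time:
lag_growth = (generated_per_s - max_send_per_s) / max_send_per_s
print(f"{lag_growth:.2f} seconds of lag added per second")          # ~0.74

days_until_week_behind = 7 / lag_growth
print(f"{days_until_week_behind:.1f} days to fall a week behind")   # ~9.4
```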

additionally, lemmy discards queued activities that are older than a week (this cleanup runs once a week), so if you stay more than 7 days behind for too long you will start completely missing activities that were past the limit. as previously explained, this can be any kind of federated content: posts, comments, and votes, which are usually not that important, but it can also affect private messages, which are then just lost without the sender ever knowing.

[–] Nothing4You@programming.dev 4 points 6 months ago

it's open source: https://github.com/Nothing4You/activitypub-federation-queue-batcher

I strongly recommend fully understanding how it works, which failure scenarios there are and how to recover from them before deploying it in production though. not all of this is currently documented, a lot of it has just been in matrix discussions.

I also have a script to prefetch posts and comments from remote communities before they'd get through via federation, which would make them appear without votes at least, and slightly improve processing speed while they're coming in through regular federation. this also doesn't require any additional privileges or being in a position to intercept traffic. it is however also not enough to catch up and stay caught up.
this script is not open source currently. while it's fairly simple and straightforward, i just didn't bother cleaning it up for publishing, as it's currently still partially integrated in an unrelated tool.
I previously tried offering to deploy this on matrix but one of my attempts to open a conversation was rejected and the other one never got accepted.
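the general idea behind the prefetch script is roughly this, though (not the actual script, just a sketch assuming the /api/v3/resolve_object endpoint and a logged-in account on the local instance; the community name is only an example):

```python
import requests

REMOTE = "https://lemmy.world"   # instance hosting the community
LOCAL = "https://aussie.zone"    # instance that should prefetch the content
LOCAL_JWT = "..."                # jwt of a logged-in account on the local instance

# grab the newest posts in a community straight from the remote instance
posts = requests.get(
    f"{REMOTE}/api/v3/post/list",
    params={"community_name": "technology", "sort": "New", "limit": 20},
).json()["posts"]

# ask the local instance to resolve each post by its activitypub id, which makes
# it fetch and store the post before federation would have delivered it
for p in posts:
    requests.get(
        f"{LOCAL}/api/v3/resolve_object",
        params={"q": p["post"]["ap_id"]},
        headers={"Authorization": f"Bearer {LOCAL_JWT}"},
    )
```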

[–] Nothing4You@programming.dev 9 points 6 months ago (3 children)

yes, that's about the second best option for the time being.

it's currently used by reddthat.com and lemmy.nz.

disclaimer: i wrote that software.

[–] Nothing4You@programming.dev 7 points 6 months ago

https://github.com/LemmyNet/lemmy/pull/4623 is on the 0.19.5 milestone, until parallel sending is implemented there won't be any benefit from parallel receiving.

0.19.4 will already have some improved logic for backgrounding some parts of the receiving logic to speed that up a little, but that won't be enough to deal with this.
