this post was submitted on 19 Jun 2023

Lemmy

Everything about Lemmy; bugs, gripes, praises, and advocacy.

For discussion about the lemmy.ml instance, go to !meta@lemmy.ml.

And I guess this question has two parts: 1. regarding the current Lemmy implementation, and 2. the ActivityPub protocol in general

top 4 comments
[–] 0xCAFE@feddit.de 2 points 1 year ago* (last edited 1 year ago) (1 children)

Speaking of scale only, bigger instances are certainly better. More, smaller instances increase the coordination overhead significantly (remember that your instance saves and serves a copy of any remote post; in the extreme case this means every server needs to hold a copy of every other server's content). Also, the more instances there are, the more peers each server has to ask for updates.
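
A quick back-of-envelope sketch of that extreme case, with purely illustrative numbers (not measured from any real instance):

```python
# Illustrative only: how full-mesh replication grows with instance count.
# Assumes every instance subscribes to communities on every other instance,
# which is the "extreme case" described above.

def replication_cost(instances: int, posts_per_instance: int) -> dict:
    total_posts = instances * posts_per_instance
    return {
        # each post ends up stored on every instance in the network
        "copies_stored_network_wide": total_posts * instances,
        # the originating server must deliver each post to every other instance
        "deliveries_per_post": instances - 1,
    }

for n in (10, 100, 1000):
    print(n, replication_cost(n, posts_per_instance=50))
```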

Many small instances have other benefits though, among them higher resilience and independence.

[–] UrbenLegend@lemmy.ml 1 points 1 year ago

At the same time, don't smaller instances mean that they generally are making fewer copies of remote posts? Fewer users means they'll only be subscribed to and viewing a few posts.

[–] PriorProject@lemmy.world 2 points 1 year ago

Every complex system (and federated systems like Lemmy qualify) has more than one potential bottleneck that can become a problem in different conditions.

  • Right now, the common performance bottleneck for Lemmy instances is heavy database reads caused by users browsing. Many of these queries are written inefficiently and can be optimized, and there are things that can be done in Postgres to scale as well. But browse traffic is one kind of workload that can reach limits, and it gets stressed when lots of users are active on one big instance.
  • Federated networks CAN experience federated replication load when there are lots of instances to deliver federation messages to. If I comment on this post, and the server hosting the community has to deliver the comment to (pinky to mouth) one million instances... that's a different kind of workload, and it gets stressed when there are lots of different instances subscribed to a single community (see the sketch after this list).
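
As a minimal sketch of that fan-out (the inbox URLs and payload are made up for illustration; this is not Lemmy's actual delivery code, which is written in Rust):

```python
# Illustration only: delivering one activity to every subscribed instance.
# The inbox URLs and the activity payload are hypothetical.
import requests

def deliver_to_subscribers(activity: dict, subscriber_inboxes: list[str]) -> None:
    # One outbound HTTP request per subscribed instance: the work grows
    # linearly with the number of instances following the community.
    for inbox in subscriber_inboxes:
        try:
            requests.post(inbox, json=activity, timeout=10)
        except requests.RequestException:
            pass  # a real server would queue and retry failed deliveries

deliver_to_subscribers(
    {"type": "Create", "object": {"type": "Note", "content": "hello"}},
    ["https://instance-a.example/inbox", "https://instance-b.example/inbox"],
)
```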

The goldilocks zone is where there is a medium number of medium-sized instances. Then each federation message can efficiently power browse traffic for a lot of users, and no single instance gets overwhelmed with browse traffic.
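
One way to get intuition for that tradeoff, with entirely made-up numbers: hold the total user base fixed and vary how many instances it is split across.

```python
# Made-up numbers, for intuition only: fixed total users, varying instance count.
TOTAL_USERS = 100_000
REQUESTS_PER_USER_PER_HOUR = 60      # assumed browse rate per user
POSTS_PER_HOUR_PER_COMMUNITY = 100   # assumed activity in one popular community

for instances in (1, 10, 100, 10_000):
    # browse load concentrates on few servers when instances are large...
    browse_load_per_instance = TOTAL_USERS // instances * REQUESTS_PER_USER_PER_HOUR
    # ...while federation fan-out grows with the number of subscribed instances
    federation_deliveries_per_hour = POSTS_PER_HOUR_PER_COMMUNITY * instances
    print(f"{instances:>6} instances: "
          f"{browse_load_per_instance:>9} browse req/h per instance, "
          f"{federation_deliveries_per_hour:>9} deliveries/h from the host")
```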

In practice, this is not how networks organize. There will be instances that are "too large" and also lots of small instances. Right now, the Lemmy network is small and federation traffic is not a meaningful bottleneck. Browse traffic is, and that's what the devs are working on. But with time, the limits of both of these things can be pushed further out, improving the scalability of the network in both directions.

[–] fubo@lemmy.world 1 points 1 year ago

It's a great question! To know this, we'd need to look into not just what the ActivityPub protocol says, but also exactly how the code base implements it, and how the server actually performs on the computers it's deployed on.

We might look specifically at:

  1. How many requests (and of what types) does a typical end user send to their local instance?
  2. How many requests (etc.) does an instance send to its peer instances?
  3. How is (2) controlled by the number of subscriptions, posts, or other variables?
  4. How does instance performance respond to different kinds of request load?
  5. How have instance operators tuned the Lemmy server or its backends to manage different loads?

Because ActivityPub is built on HTTP, different types of request are expressed as different URL paths and HTTP methods. This should make it straightforward to characterize the servers' behavior under different kinds of load (e.g. lots of local posts, lots of remote posts, lots of new user accounts, etc.).
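
For instance, one could bucket an access log by method and path prefix to see which kinds of request dominate; the log format and path prefixes below are assumptions, not taken from a real deployment:

```python
# Rough sketch: count request types in a combined-format HTTP access log.
# Log format and the API path prefixes are assumptions for illustration.
import re
from collections import Counter

LINE = re.compile(r'"(?P<method>[A-Z]+) (?P<path>\S+) HTTP/[\d.]+"')

def bucket(path: str) -> str:
    if "/inbox" in path:
        return "federation inbox"
    if path.startswith("/api/v3/post"):
        return "post API"
    if path.startswith("/api/v3/comment"):
        return "comment API"
    return "other"

counts = Counter()
with open("access.log") as f:          # hypothetical log file
    for line in f:
        m = LINE.search(line)
        if m:
            counts[(m["method"], bucket(m["path"]))] += 1

for (method, kind), n in counts.most_common():
    print(f"{n:8d}  {method:6s} {kind}")
```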

Doing this "for serious" as an engineering project would require some amount of testing infrastructure, e.g. the ability to replay various kinds of traffic against a Lemmy server while monitoring its performance.
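
A toy version of such a replay harness might look like the following, assuming requests were captured to a plain `METHOD PATH` text file and that we only care about latency (a real test would also need realistic payloads, auth, and concurrency):

```python
# Toy replay harness: re-send captured requests against a test instance
# and record response times. The capture file format is an assumption.
import time
import statistics
import requests

BASE_URL = "http://localhost:8536"  # assumed local test instance

latencies = []
with open("captured_requests.txt") as f:
    for line in f:
        method, path = line.split(maxsplit=1)
        start = time.monotonic()
        requests.request(method, BASE_URL + path.strip(), timeout=30)
        latencies.append(time.monotonic() - start)

print(f"requests: {len(latencies)}")
print(f"p50: {statistics.median(latencies):.3f}s  max: {max(latencies):.3f}s")
```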