I like the idea of Lemmy/kbin and the fediverse, but there's something I perhaps don't understand.

If Lemmy becomes very popular in the future and someone wants to add their own server and federate with everyone, then from that moment the new instance will receive all new comments, posts, etc. from every instance it's federated with and must save them in its DB. This means that if Lemmy gets popular, forget about little guys helping to spread the "load", because every instance still has to take in and store all new data. That's a lot of processing power and storage. How can this work? I can see only a few instances surviving in the future.
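
For what it's worth, here's a rough back-of-envelope sketch in Python of what full duplication could cost. Every number in it is a made-up assumption (not real Lemmy figures), just to show how the same content ends up stored over and over across the network:

```python
# Back-of-envelope sketch (all numbers are assumptions, not real Lemmy stats):
# how much storage the network as a whole burns when every federated instance
# keeps its own copy of the content its users subscribe to.

posts_per_day = 20_000        # assumed network-wide new posts per day
comments_per_day = 200_000    # assumed network-wide new comments per day
avg_object_bytes = 2_000      # assumed average size of a stored post/comment + metadata
instances = 1_000             # assumed number of federated instances
overlap = 0.5                 # assumed fraction of the network each instance ends up mirroring

daily_new_bytes = (posts_per_day + comments_per_day) * avg_object_bytes
per_instance_daily = daily_new_bytes * overlap
network_daily = per_instance_daily * instances

print(f"new content per day:    {daily_new_bytes / 1e9:.2f} GB")
print(f"per instance per day:   {per_instance_daily / 1e9:.2f} GB")
print(f"whole network per day:  {network_daily / 1e9:.2f} GB")
print(f"whole network per year: {network_daily * 365 / 1e12:.2f} TB")
```

With those made-up inputs the network only produces about 0.44 GB of new content a day, but the copies across 1,000 instances add up to roughly 80 TB a year of mostly duplicated data.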

If each instance were a node that only took care of its own posts and comments and forwarded them to others upon request, I could understand scaling, but that's not how it works AFAIK. Another way would be consensus algorithms, where a node saves more than its own data but still not all of it.
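
To illustrate the "forward them upon request" idea, here's a minimal, hypothetical sketch: the instance keeps only what its own users wrote, plus a small cache, and fetches everything else from the origin instance when asked. The URLs and cache policy are assumptions for illustration, not how Lemmy or ActivityPub actually behave today:

```python
# Hypothetical fetch-on-demand model: store local objects, cache remote ones
# briefly, and otherwise ask the instance that owns the object.
import urllib.request

local_objects: dict[str, bytes] = {}   # posts/comments authored on this instance
remote_cache: dict[str, bytes] = {}    # short-lived copies of remote objects

def resolve(object_url: str) -> bytes:
    """Return an object: local data first, then the cache, then the origin instance."""
    if object_url in local_objects:
        return local_objects[object_url]
    if object_url in remote_cache:
        return remote_cache[object_url]
    # Not ours and not cached: fetch it from the instance that owns it.
    with urllib.request.urlopen(object_url) as resp:
        body = resp.read()
    remote_cache[object_url] = body    # keep a temporary copy; evict on some policy
    return body
```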

[–] maegul@lemmy.ml 1 points 1 year ago (2 children)

Yea, I think you're right. Once any instance has enough users with enough interests and subscriptions to enough communities, you get a scenario where a good portion of the whole network is duplicated on every node, or many nodes, of the network. This is how the fediverse works, and I've yet to see anyone seriously address what this looks like at large scales and over long timelines.

Storage space isn’t too expensive I guess, so maybe it’s something we can just solve when we come to it.

But the problem may be worse with threadiverse platforms (lemmy/kbin and any other grouped or threaded platform) for exactly the reason you highlight … the whole community and all of its discussions get duplicated. For microblogging platforms, things are more granular, as it's only single posts by people who are followed that get duplicated.

It may not be fatal, and may be something we can solve when we get there, which makes sense as getting up to a significant scale of users is tough in its own right … but it'd sure be nice to see someone think through the numbers.

[–] MentalEdge@sopuli.xyz 1 points 1 year ago

This is literally how the entire internet works. You are describing CDNs.

[–] HelloLemmySup@sh.itjust.works 0 points 1 year ago

That's why, in my mind, something like a consensus algorithm with the data duplicated N times, where N < the number of instances with subscribed users, would make more sense. As it is right now I can't see it scaling past the few instances that can afford to keep it running.
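
For example (purely a sketch, with made-up instance names and a made-up N, not anything Lemmy actually does): rendezvous hashing would let every instance independently agree on which N instances hold a copy of a given community, so nobody has to store everything:

```python
# Hypothetical sketch: pick N replica instances per community with
# rendezvous (highest-random-weight) hashing, so the replica set is
# deterministic and agreed on by everyone without central coordination.
import hashlib

INSTANCES = ["lemmy.ml", "sh.itjust.works", "sopuli.xyz", "lemmy.world", "beehaw.org"]
N = 3  # replication factor: copies per community, well below the total instance count

def replicas_for(community: str, instances=INSTANCES, n=N) -> list[str]:
    """Return the n instances with the highest hash score for this community."""
    def score(instance: str) -> int:
        return int.from_bytes(
            hashlib.sha256(f"{community}@{instance}".encode()).digest(), "big"
        )
    return sorted(instances, key=score, reverse=True)[:n]

print(replicas_for("!meta@lemmy.ml"))  # every node running this picks the same 3 replicas
```

The nice property is that each community's data exists N times for redundancy, but the storage per instance grows with the communities it's responsible for rather than with the whole network.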