SLRPNK

1,221 readers
143 users here now

What is Solarpunk?

A SolarPunk Manifesto

Basic Rules:

For any community-related question, or just to test some function: !meta@slrpnk.net

Try our Photon & Alexandrite frontends.

Or try our lightweight UI and Voyager mobile UI.

All accounts also automatically work with XMPP chat, including our Movim client.

If you need to jointly brainstorm on your next Solarpunk text, try our Etherpad.

And don't miss our Wiki.

founded 2 years ago

Where solarpunks organize for a better world!

submitted 1 week ago* (last edited 1 week ago) by ProdigalFrog to c/meta

Introduction

Another month has passed, which means the old Meta Community Discussion is replaced with a new one. These posts give an update on the happenings of the instance and provide a place to share your thoughts on the instance, or on anything else that doesn't warrant its own c/Meta post.

Now, let us have a quiet moment in remembrance of the June discussion thread as we release it from its pinned status, back into the wilds of the fediverse.

...Right, that's quite enough remembering.

Slaps server

Let's bust out July.

Feddit.de is limping! Long live Feddit.org!

A few months ago, Feddit.de suffered a partial failure while its main admin was unavailable due to extended work-related travel. As the Feddit.de admin is still missing, several of the site's moderators collaborated with the Fediverse.foundation, and last week Feddit.org was launched to cover many of the same communities the old Feddit.de hosted. This is an inspiring development, and demonstrates one way a federated social network can respond to damage.

Community Highlights

We had some new communities pop up on the server last month!

  • !competenceporn@slrpnk.net, created by @Emotet, who, I would like to point out, has a really cool animated avatar and profile background. Their community is focused on competent people doing things competently in media.

  • !electricvehicles@slrpnk.net, created by @sabreW4K3, which focuses on news and discussions about EVs.

  • !ancienthistory@slrpnk.net, created by @reallykindasort. A place to discuss and share history of ancient peoples!

In other news, @countrypunk is looking for new moderators for !nature_spirituality@slrpnk.net. If that topic interests you, why not throw them a message? I'm sure they'd appreciate the help!

We'd also like to thank @JacobCoffinWrites for graciously taking over as moderator for !utilitycycling@slrpnk.net, and to @Midnight for becoming part of the !collapse@slrpnk.net moderation team! Good stuff y'all :)

If you're already a mod, joining the SLRPNK XMPP chat is a great way to grow your moderation team and share tips with other moderators. (It's also open to all members of the instance, and since it's fully federated, you can use it for non-solarpunk chatrooms too!)

If you're not already a mod, and see a community that you would like to support with moderation, don't hesitate to contact the current mods, or us admins.

Some examples of abandoned communities in need of new moderators are:

Again, if any of those seem like your bag, slap a comment down below, and we'll mod you up! Having those extra sets of eyes and developing your community into whatever your creative vision holds is not only a tremendous help to us admins, it's quite fulfilling in its own right! :D

Technical Updates

Last month we updated our instance to Lemmy version 0.19.4/5, which brought some minor technical issues that were mostly fixed in the latter release. One major change was the introduction of an image proxy. This means that images in newly added posts (and new user/community avatars) are no longer directly downloaded from the federated servers, but rather first mirrored on our server. This has some privacy advantages and should also improve site loading speed a bit. We will have to see how this will affect our image storage capacity in the medium term, but so far it doesn't seem to have had a big impact and with our recent server upgrade, we do have quite a lot of storage space.

We also experimentally increased the maximum upload size to 10 MB and now allow small video files as well. We still need to see what the impact of this is, though. If it results in too much bandwidth and/or storage space use, we might have to reconsider.

Open Discussion

Aaaand... Yeah, that's the news for July. It was pretty chill, pretty snazzy, even! But now it's your turn to share whatever snazziness is happening on your mind. Anything related to the instance, the fediverse, or the server you'd like to ask about or discuss? Created a new community you want everyone to know about? Then mosey on down to the comments, my dude!

Everything you post here will be highlighted until the start of next month, which is like, a really long time (until it isn't. Time is a cruel, cruel mistress).

Lemmy just reached a new milestone: 1 million posts, across 1,323 servers.

Source: https://lemmy.fediverse.observer/dailystats&days=90

I'm tired of reading about Reddit here.

We left. Let's move on.

Please note this is just a beta and there are going to be bugs, but it works and it works nicely. Have fun.

AccidentalRenaissance has no active moderators due to Reddit's unprecedented API changes, and has thus been set to private to prevent vandalism.

Resignation letters:

Openminded_Skeptic - https://imgur.com/a/WwzQcac

VoltasPistol - https://imgur.com/a/lnHSM4n

We welcome you to join us in our new homes:

https://kbin.social/m/AccidentalRenaissance

https://lemmy.blahaj.zone/c/accidentalrenaissance

Thank you for all your support!

Original post from r/ModCoord

I hate that everything now is a subscription service instead of something you buy and do whatever you want with.

submitted 1 year ago* (last edited 1 year ago) by ruud@lemmy.world to c/lemmyworld@lemmy.world

Looks like it works.

Edit: still seeing some performance issues. Needs more troubleshooting.

Update: registrations re-opened. We encountered a bug where people could not log in; see https://github.com/LemmyNet/lemmy/issues/3422#issuecomment-1616112264 . As a workaround we opened registrations.

Thanks

First of all, I would like to thank the Lemmy.world team and the 2 admins of other servers @stanford@discuss.as200950.com and @sunaurus@lemm.ee for their help! We did some thorough troubleshooting to get this working!

The upgrade

The upgrade itself isn't too hard. Create a backup, and then change the image names in the docker-compose.yml and restart.
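A sketch of those steps, assuming a typical docker-compose deployment (the image tags, file contents, and backup command here are illustrative, not lemmy.world's actual setup):

```shell
# Illustrative upgrade sketch; tags and service names are assumptions.

# 1. Back up the database first (needs the running stack, so commented out):
# docker compose exec -T postgres pg_dumpall -U lemmy > lemmy-backup.sql

# A stand-in docker-compose.yml so the sketch is self-contained:
cat > docker-compose.yml <<'EOF'
services:
  lemmy:
    image: dessalines/lemmy:0.18.0
  lemmy-ui:
    image: dessalines/lemmy-ui:0.18.0
EOF

# 2. Bump the image tags to the new release:
sed -i -e 's|dessalines/lemmy:0.18.0|dessalines/lemmy:0.18.1-rc|' \
       -e 's|dessalines/lemmy-ui:0.18.0|dessalines/lemmy-ui:0.18.1-rc|' \
       docker-compose.yml

# 3. Pull the new images and restart (also needs the running stack):
# docker compose pull && docker compose up -d
```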

But, like the first 2 tries, after a few minutes the site started getting slow until it stopped responding. Then the troubleshooting started.

The solutions

What I had noticed previously is that the lemmy container could reach around 1500% CPU usage; above that, the site got slow. Which is weird, because the server has 64 threads, so 6400% should be the max. So we tried what @sunaurus@lemm.ee had suggested before: we created extra lemmy containers to spread the load (and extra lemmy-ui containers as well), and used nginx to load-balance between them.

Et voilà. That seems to work.

Also, as suggested by him, we start the lemmy containers with the scheduler disabled, and have 1 extra lemmy container running with the scheduler enabled, which isn't used for serving regular traffic.
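In nginx, the load-balancing setup described above might look roughly like this (container names and ports are assumptions, not the actual lemmy.world config):

```nginx
# Illustrative sketch: several identical lemmy containers behind one upstream.
upstream lemmy-backend {
    server 127.0.0.1:8536;   # lemmy container 1
    server 127.0.0.1:8537;   # lemmy container 2
    server 127.0.0.1:8538;   # lemmy container 3
}

server {
    listen 443 ssl;
    server_name lemmy.world;

    location / {
        # nginx round-robins requests across the containers by default
        proxy_pass http://lemmy-backend;
    }
}
```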

There will be room for improvement, and probably new bugs, but we're very happy lemmy.world is now at 0.18.1-rc. This fixes a lot of bugs.

Lemmy World outages (lemmy.world)
submitted 11 months ago* (last edited 11 months ago) by lwadmin@lemmy.world to c/lemmyworld@lemmy.world

Hello there!

It has been a while since our last update, but it's about time to address the elephant in the room: downtimes. Lemmy.World has been having multiple downtimes a day for quite a while now. And we want to take the time to address some of the concerns and misconceptions that have been spread in chatrooms, memes and various comments in Lemmy communities.

So let's go over some of these misconceptions together.

"Lemmy.World is too big and that is bad for the fediverse".

While it is true that we are the biggest Lemmy instance, we are far from the biggest in the Fediverse. If you want actual numbers, you can have a look here: https://fedidb.org/network

The entire Lemmy fediverse is still in its infancy, and even though we don't like to compare ourselves to Reddit, it gives you something comparable. The total number of Lemmy users across all instances is currently 444,876, which is still nothing compared to a medium-sized subreddit. There are some points to be made that it is better to spread the load of users and communities across other instances, but let us make it clear that this is not a technical problem.

And even in a decentralised system, there will always be bigger and smaller blocks within; such would be the nature of any platform looking to be shaped by its members. 

"Lemmy.World should close down registrations"

Lemmy.World is being linked in a number of Reddit subreddits and in Lemmy apps. Imagine if new users land here and they have no way to sign up. We have to assume that most new users have no information on how the Fediverse works and making them read a full page of what's what would scare a lot of those people off. They probably wouldn't even take the time to read why registrations would be closed, move on and not join the Fediverse at all. What we want to do, however, is inform the users before they sign up, without closing registrations. The option is already built into Lemmy but only available on Lemmy.ml - so a ticket was created with the development team to make these available to other instance Admins. Here is the post on Lemmy Github.

Which brings us to the third point:

"Lemmy.World can not handle the load, that's why the server is down all the time"

This is simply not true. There are no financial issues preventing a hardware upgrade, should that be required; but that is not the solution to this problem.

The problem is that for a couple of hours every day we are under a DDoS attack. It's a never-ending game of whack-a-mole where we close one attack vector and they start using another one. Without going into too much detail and exposing too much, there are some very 'expensive' SQL queries in Lemmy - actions or features that take seconds instead of milliseconds to execute. By executing them by the thousand a minute, you can overload the database server.

So who is attacking us? One thing that is clear is that those responsible for these attacks know the ins and outs of Lemmy. They know which database requests are the most taxing, and they are always quick to find another as soon as we close one off. That's one of the only things we know for sure about our attackers. Being the biggest instance, and having defederated from a couple of instances, has made us a target.

"Why do they need another sysop who works for free"

Everyone involved with LW works as a volunteer. The money that is donated goes to operational costs only - so hardware and infrastructure. And while we understand that working as a volunteer is not for everyone, nobody is forcing anyone to do anything. As a volunteer you decide how much of your free time you are willing to spend on this project, a service that is also being provided for free.

We will leave this thread pinned locally for a while and we will try to reply to genuine questions or concerns as soon as we can.

I strongly encourage instance admins to defederate from Facebook/Threads/Meta.

They aren't some new, bright-eyed group with no track record. They're a borderline Machiavellian megacorporation with a long and continuing history of extremely hostile actions:

  • Helping to amplify genocide in multiple countries
  • Openly and willingly taking part in political manipulation (see Cambridge Analytica)
  • Actively campaigning against net neutrality and attempting to make "facebook" most of the internet for people in countries with weaker internet infra - directly contributing to their amplification of genocide (see the genocide link for info)
  • Using their users as non-consenting subjects in psychological experiments.
  • Absolutely ludicrous invasions of privacy - even if they aren't able to do this directly to the Fediverse, it illustrates their attitude.
  • Even now, being on record as attempting to get instance admins into backdoor discussions and to sign NDAs.

Yes, I know one of the Mastodon folks has said they're not worried. Frankly, I think they're being laughably naive >.<. Facebook/Meta - and Instagram's CEO - might say pretty words - but words are cheap, and from a known-hostile entity like Meta/Facebook they are almost certainly just a manipulation strategy.

In my view, they should be discarded as entirely irrelevant, or viewed as deliberate lies, given their continued atrocious behaviour and open manipulation of vast swathes of the population.

Facebook has large amounts of experience in how to attack and astroturf social media communities - hell, I would be very unsurprised if they are already doing it, but it's difficult to say without solid evidence ^.^

Why should we believe anything they say, ever? Why should we believe they aren't just trying to destroy a competitor before it gets going properly, or worse, turn it into yet another arm of their sprawling network of services, via Embrace, Extend, Extinguish - or perhaps Embrace, Extend, Consume would be a better term in this case?

When will we ever learn that openly-manipulative, openly-assimilationist corporations need to be shoved out before they can gain any foothold and subsume our network and relegate it to the annals of history?

I've seen plenty of arguments claiming that it's "anti-open-source" to defederate, or that it means we aren't "resilient", which is wrong ^.^:

  • Open source isn't about blindly trusting every organisation that participates in a network, especially not one which is known-hostile. Threads can start their own ActivityPub network if they really want or implement the protocol for themselves. It doesn't mean we lose the right to kick them out of most - or all - of our instances ^.^.
  • Defederation is part of how the fediverse is resilient. It is the immune system of the network against hostile actors (it can be used in other ways, too, of course). Facebook, I think, is a textbook example of a hostile actor, and has such an unimaginably bad record that anything they say should be treated as a form of manipulation.

Edit 1 - Some More Arguments

In this thread, I've seen some more arguments about Meta/FB federation:

  • Defederation doesn't stop them from receiving our public content:
    • This is true, but very incomplete. The content you post is public, but what Meta/Facebook is really after is having their users interact with content. Defederation prevents this.
  • Federation will attract more users:
    • Only if Threads makes it trivial to move/make accounts on other instances, and makes the fact it's a federation clear to the users, and doesn't end up hosting most communities by sheer mass or outright manipulation.
    • Given that Threads as a platform is not open source - you can't host your own "Threads Server" instance - and presumably their app only works with the Threads Server that they run - this is very unlikely. Unless they also make Threads a Mastodon/Calckey/KBin/etc. client.
    • Therefore, their app is probably intending to make itself their user's primary interaction method for the Fediverse, while also making sure that any attempt to migrate off is met with unfamiliar interfaces because no-one else can host a server that can interface with it.
    • Ergo, they want to strongly incentivize people to stay within their walled garden version of the Fediverse by ensuring the rest remains unfamiliar - breaking the momentum of the current movement towards it. ^.^
  • We just need to create "better" front ends:
    • This is a good long-term strategy, because of the cycle of enshittification.
    • Facebook/Meta has far more resources than us to improve the "slickness" of their clients at this time. Until the fediverse grows more, and while they aren't yet under immediate pressure to make their app profitable via enshittification and advertising, we won't manage >.<
    • This also assumes that Facebook/Meta won't engage in efforts to make this harder e.g. Embrace, Extend, Extinguish/Consume, or social manipulation attempts.
    • Therefore we should defederate and still keep working on making improvements. This strategy of "better clients" is only viable in combination with defederation.

PART 2 (post got too long!)

Lemmy.ml has now blocked Threads.net

Winning is relative (sh.itjust.works)
submitted 11 months ago by bpeu@sh.itjust.works to c/memes@lemmy.ml

Edit: obligatory explanation (thanks mods for squaring me away)...

What you see via the UI isn't "all that exists". Unlike Reddit, where everything is a black box, there are a lot more eyeballs who can see "under the hood". Any instance admin, proper or rogue, gets a ton of information that users won't normally see. The attached example demonstrates that while users will only see upvote/downvote tallies, admins can see who actually performed those actions.

Edit: To clarify, not just YOUR instance admin gets this info. This is ANY instance admin across the Fediverse.

submitted 1 year ago* (last edited 1 year ago) by ruud@lemmy.world to c/lemmyworld@lemmy.world

Another day, another update.

More troubleshooting was done today. What did we do:

  • Yesterday evening, @phiresky@lemmy.world did some SQL troubleshooting with some of the lemmy.world admins. After that, phiresky submitted some PRs to GitHub.
  • @cetra3@lemmy.ml created a docker image containing 3 PRs: Disable retry queue, Get follower Inbox Fix, Admin Index Fix
  • We started using this image and saw a big drop in CPU usage and disk load.
  • We saw thousands of errors per minute in the nginx log from old clients trying to access the websockets (which were removed in 0.18), so we added a return 404 in the nginx conf for /api/v3/ws.
  • We updated lemmy-ui from RC7 to RC10, which fixed a lot, including the issue with replying to DMs.
  • We found that the many 502 errors were caused by an issue in Lemmy/markdown-it.actix or whatever, causing nginx to temporarily mark an upstream as dead. As a workaround we can either 1) only use 1 container, or 2) set ~~proxy_next_upstream timeout;~~ max_fails=5 in nginx.
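The two nginx tweaks from that list, sketched together (ports and upstream names here are assumptions, not the actual lemmy.world config):

```nginx
upstream lemmy-backend {
    # allow 5 failures before nginx marks a container as dead,
    # instead of the default max_fails=1
    server 127.0.0.1:8536 max_fails=5;
    server 127.0.0.1:8537 max_fails=5;
}

server {
    listen 443 ssl;
    server_name lemmy.world;

    # old 0.17.x clients still poll the removed websocket endpoint;
    # answer them with a cheap 404 instead of forwarding to Lemmy
    location /api/v3/ws {
        return 404;
    }

    location / {
        proxy_pass http://lemmy-backend;
    }
}
```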

Currently we're running with 1 lemmy container, so the 502 errors are completely gone so far, and because of the fixes in the Lemmy code everything seems to be running smoothly. If needed, we could spin up a second lemmy container using the ~~proxy_next_upstream timeout;~~ max_fails=5 workaround, but for now it seems to hold with 1.

Thanks to @phiresky@lemmy.world , @cetra3@lemmy.ml , @stanford@discuss.as200950.com, @db0@lemmy.dbzer0.com , @jelloeater85@lemmy.world , @TragicNotCute@lemmy.world for their help!

And not to forget, thanks to @nutomic@lemmy.ml and @dessalines@lemmy.ml for their continuing hard work on Lemmy!

And thank you all for your patience, we'll keep working on it!

Oh, and as a bonus, an image (thanks Phiresky!) of the change in bandwidth after implementing the new Lemmy docker image with the PRs.

Edit: So as soon as the US folks wake up (hi!) we seem to need the second Lemmy container for performance. So that's now started, and I noticed the proxy_next_upstream timeout setting didn't work (or I didn't set it properly), so I used max_fails=5 for each upstream, which does actually work.

Welcome to the fediverse!

F#€k $pez (lemmy.ml)
submitted 7 months ago by Grayox@lemmy.ml to c/memes@lemmy.ml
 
 
submitted 1 year ago* (last edited 1 year ago) by eco@lemmy.world to c/syncforlemmy@lemmy.world

@ljdawson shared on Discord
