spaduf

joined 1 year ago
[–] spaduf 3 points 7 months ago (1 children)

Having moderation work in an expected and consistent way is hardly the same thing as moderation tooling.

[–] spaduf 1 points 7 months ago

Love the idea of a "similar communities" button, but I don't know if I'd say searching communities is really that hard.

[–] spaduf 2 points 7 months ago

I think there's some pretty low-hanging fruit here, but most of it is platform-specific.

On Mastodon, I think this looks like a revamp of the "For You" section. As it stands, the posts are mostly human-curated and the people section is mostly a static list: once you've scrolled through it, it won't be any different next time unless you've followed a significant number of people outside of it. It would be nice if it at least showed you the next 20 or so by the same metric.

On Lemmy, I think making the functionality provided by the trending communities community a first-class feature would go a long way.

[–] spaduf 3 points 7 months ago* (last edited 7 months ago)

Unmanic to optimize your library in the background. Encoding things to x265 can buy you a huge amount of space.

Edit: Reading again, I see that you're on a Pi. Not at all sure what video encoding performance is like on those.
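
For anyone curious what that looks like in practice, here's a minimal sketch of the kind of batch job Unmanic automates, written as a plain ffmpeg loop (the library path and CRF value are assumptions for illustration, not Unmanic's actual configuration):

```python
# Walk a library and re-encode anything that isn't already x265,
# shelling out to ffmpeg. Unmanic adds a queue, scheduling, and a
# web UI on top of essentially this.
import subprocess
from pathlib import Path

LIBRARY = Path("/media/library")  # hypothetical library root

for src in LIBRARY.rglob("*.mkv"):
    if src.stem.endswith(".x265"):
        continue  # one of our own outputs
    dst = src.with_name(src.stem + ".x265.mkv")
    if dst.exists():
        continue  # already converted
    subprocess.run(
        [
            "ffmpeg", "-i", str(src),
            "-c:v", "libx265", "-crf", "23",  # x265 at a middling quality level
            "-c:a", "copy",                   # pass audio through untouched
            str(dst),
        ],
        check=True,
    )
```

Software x265 encoding is CPU-heavy, so on a Pi expect this to be slow going.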

[–] spaduf 39 points 8 months ago (8 children)

Seems to me the most likely explanation is they got caught and fixed it.

[–] spaduf 2 points 8 months ago

#BlackMastodon is a thing. I think there's also a guppe group?

[–] spaduf 8 points 8 months ago (4 children)

Shouldn't we blame this on the food monopolies rather than grocery stores?

[–] spaduf 1 points 8 months ago

Well, I got laid off, so my priorities have shifted drastically. Otherwise, there were a few obstacles to getting going. At the time I started, there were some pretty major federation issues among servers running different versions of Lemmy, and those remained unfixed until relatively recently. The other major barrier was the lack of a good solution for information hosting (in my mind this niche is best served by wikis), but there have been some recent developments in that area: notably, SLRPNK now has a wiki, and Ibis, a new federated wiki solution, has appeared. Before this, I had been working on a Lemmy bot that would more or less jam wiki functionality into the existing Lemmy frontend, but now I think I'm probably better off using one of the previously mentioned solutions. My work situation may change soon, in which case I'll pick this project back up. Alternatively, if someone else wanted to be a major contributor, a split workload would probably also let me pick it back up.

[–] spaduf 3 points 8 months ago* (last edited 8 months ago) (1 children)

I generally disagree with the article's analysis. In particular, I think that Gen Z men and women showing roughly the same voting divide as older generations still constitutes a major shift. If it gets to the point that Gen Z has a greater divide than older generations, I would consider that an extreme result of this trend. Curious what y'all think.

[–] spaduf 1 points 8 months ago

I think there are some pretty interesting implications for a fediverse-first or distributed wiki.

[–] spaduf 1 points 8 months ago

> before they could use the app

Reading comprehension's not your strong suit, eh?

[–] spaduf 25 points 8 months ago (1 children)

Regarding Sup: dansup has mentioned that he's put the project on hold until the new EU guidelines around interoperability (targeting WhatsApp) are available.

5
submitted 10 months ago* (last edited 10 months ago) by spaduf to c/digitalcommunitybuilding
 

The project hopes to directly advance fediverse connectivity and curation by establishing easy-to-follow best practices for building a digital community from scratch. It is also meant to push the boundaries of current methods and help advocate for the building of institutional knowledge.

This community is very much an experiment, but it is also a place to experiment. With this in mind, if you are thinking of trying something new, please consider posting your ideas and results. To that end, much of this project will be composed of living documents that will change as we further develop this concept.

 

cross-posted from: https://slrpnk.net/post/5710029

Institution: Wikiversity
Lecturer: Boud Roukema
Subject: #physics #specialrelativity #generalrelativity
Description: Special relativity and steps towards general relativity is a one-semester Wikiversity course that uses the geometrical approach to understanding special relativity and presents a few elements towards general relativity. The course may be used in a traditional university, within the conditions of the free licensing terms indicated at the bottom of this Wikiversity web page. It may be modified and redistributed according to the same conditions, for example, via the Wikiversity and Wikimedia Commons web sites.

5
submitted 10 months ago by spaduf to c/autodidact
 

The courses are substantially more complete than typical OCW courses and include new custom-created content as well as materials repurposed from MIT classrooms. The materials are also arranged in logical sequences and include multimedia such as video and simulations.

 


7
submitted 10 months ago* (last edited 10 months ago) by spaduf to c/digitalcommunitybuilding
 

Please post your experiences as a Lemmy or Kbin moderator/admin, and I'll type them up into a guide entitled "Building a Lemmy Community From Scratch" that will live as a living document in the sidebar and on the wiki. If applicable, please note what efforts did NOT work as well as what did.


A draft of the document will appear here after a minimum number of responses have been collected.

 


4
submitted 10 months ago* (last edited 10 months ago) by spaduf to c/opencourselectures
 

Institution: Stanford
Lecturer: Fei-Fei Li, Justin Johnson, Serena Yeung
University Course Code: CS 231n
Subject: #computervision #machinelearning


Description: Computer Vision has become ubiquitous in our society, with applications in search, image understanding, apps, mapping, medicine, drones, and self-driving cars. Core to many of these applications are visual recognition tasks such as image classification, localization and detection. Recent developments in neural network (aka “deep learning”) approaches have greatly advanced the performance of these state-of-the-art visual recognition systems. This lecture collection is a deep dive into details of the deep learning architectures with a focus on learning end-to-end models for these tasks, particularly image classification. From this lecture collection, students will learn to implement, train and debug their own neural networks and gain a detailed understanding of cutting-edge research in computer vision.
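
For a taste of what "implement, train and debug their own neural networks" starts from, here's a minimal linear softmax classifier trained with gradient descent (numpy only; the random toy data and all hyperparameters are illustrative, not from the course materials):

```python
# Train a linear softmax classifier on random toy data with
# plain gradient descent.
import numpy as np

rng = np.random.default_rng(0)
N, D, C = 200, 32, 10                 # samples, input dims, classes
X = rng.normal(size=(N, D))           # toy "images" as flat vectors
y = rng.integers(0, C, size=N)        # toy labels

W = 0.01 * rng.normal(size=(D, C))    # weight matrix
lr, reg = 1e-1, 1e-3                  # learning rate, L2 strength

for step in range(200):
    scores = X @ W
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(scores)
    probs /= probs.sum(axis=1, keepdims=True)
    loss = -np.log(probs[np.arange(N), y]).mean() + 0.5 * reg * (W ** 2).sum()

    dscores = probs.copy()
    dscores[np.arange(N), y] -= 1                 # softmax cross-entropy gradient
    dW = X.T @ dscores / N + reg * W
    W -= lr * dW

print(f"final loss: {loss:.3f}")
```

The full course swaps this linear model out for convolutional architectures, but the train/debug loop keeps this shape.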

 

cross-posted from: https://slrpnk.net/post/5501378

For folks who aren't sure how to interpret this: what we're looking at here is early work establishing an upper bound on the complexity of a problem a model can handle based on its size. Research like this is absolutely essential for determining whether these absurdly large models are actually going to achieve, on any sort of consistent basis, the results people have already ascribed to them. Previous work on monosemanticity and superposition is relevant here, particularly with regard to unpacking where and when these errors will occur.

I've been thinking about this a lot with regard to how poorly defined the output space they're trying to achieve is. Currently we're trying to encode one or more human languages, logical/spatial reasoning (particularly for multimodal models), a variety of writing styles, and some set of arbitrary facts (to say nothing of the nuance associated with those facts). Just by making an informal order-of-magnitude argument (a toy version is sketched below), I think we can quickly determine that a lot of the supposed capabilities of these massive models have strict theoretical limitations on their correctness.

This should, however, give one hope for more specialized models. Nearly every one of the above-mentioned "skills" is small enough to fit into our largest models with absolute correctness. Where things get tough is when you fail to clearly define your output space and focus training so as to maximize encoding efficiency for a given number of parameters.
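
To make that hand-wave concrete, here's a toy version of the order-of-magnitude argument. Every number below is an assumption picked for illustration (parameter count, storable bits per parameter, and the information content of each "skill"), not a measurement:

```python
# Back-of-envelope: compare a model's rough storage capacity against
# guessed information-content figures for the "skills" we ask it to encode.
params = 7e10           # assume a ~70B-parameter model
bits_per_param = 2.0    # assumed storable information per parameter
capacity_bits = params * bits_per_param

# Wildly approximate guesses, purely for the shape of the argument:
skills_bits = {
    "one human language (lexicon + usage patterns)": 1e9,
    "basic logical/spatial reasoning rules": 1e8,
    "a handful of writing styles": 1e8,
    "broad encyclopedic facts, with nuance": 1e12,
}

for skill, bits in skills_bits.items():
    print(f"{skill}: ~{bits / capacity_bits:.1%} of capacity")

print(f"everything at once: ~{sum(skills_bits.values()) / capacity_bits:.1%}")
```

Under these made-up numbers, each narrow skill fits with room to spare; it's the open-ended fact space that blows the budget, which is exactly where I'd expect correctness to break down.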

 


 

cross-posted from: https://mander.xyz/post/8095934

Looks like we're getting company!
