this post was submitted on 05 Dec 2024
13 points (100.0% liked)

SneerClub


Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

founded 2 years ago
[–] blakestacey@awful.systems 10 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

Abstract: This paper presents some of the initial empirical findings from a larger forthcoming study about Effective Altruism (EA). The purpose of presenting these findings disarticulated from the main study is to address a common misunderstanding in the public and academic consciousness about EA, recently pushed to the fore with the publication of EA movement co-founder Will MacAskill’s latest book, What We Owe the Future (WWOTF). Most people in the general public, media, and academia believe EA focuses on reducing global poverty through effective giving, and are struggling to understand EA’s seemingly sudden embrace of ‘longtermism’, futurism, artificial intelligence (AI), biotechnology, and ‘x-risk’ reduction. However, this agenda has been present in EA since its inception, where it was hidden in plain sight. From the very beginning, EA discourse operated on two levels, one for the general public and new recruits (focused on global poverty) and one for the core EA community (focused on the transhumanist agenda articulated by Nick Bostrom, Eliezer Yudkowsky, and others, centered on AI-safety/x-risk, now lumped under the banner of ‘longtermism’). The article’s aim is narrowly focused on presenting rich qualitative data to make legible the distinction between public-facing EA and core EA.

[–] blakestacey@awful.systems 13 points 3 weeks ago (1 children)

From page 17:

Rather than encouraging critical thinking, in core EA the injunction to take unusual ideas seriously means taking one very specific set of unusual ideas seriously, and then providing increasingly convoluted philosophical justifications for why those particular ideas matter most.

ding ding ding

[–] dashdsrdash@awful.systems 3 points 2 weeks ago (1 children)

You must prove yourself in the Outer Circle before being granted leave to study the Inner Mysteries. Or at least attend the right parties.

[–] dgerard@awful.systems 1 point 2 weeks ago

* join the right polycules