this post was submitted on 12 Mar 2024

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


As suggested at this thread, to a general "yeah, sounds cool". Let's see if this goes anywhere.

Original inspiration:

The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

If your sneer seems higher quality than you thought, feel free to make it a post, there's no quota here

top 11 comments
[–] sailor_sega_saturn@awful.systems 1 points 8 months ago* (last edited 8 months ago)

So today I learned there are people who call themselves superforecasters®. Neat!

The superforecasters® have had a melding of the minds and determined that covid-19 was 75% likely to not be a lab leak. Nifty! This is useless to me!

Looking at the website of these people with good enough judgement to call themselves "Good Judgement", you can learn that 100% of superforecasters® agree that there will be fewer than 100 deaths from H5N1 this year. I don't know much about H5N1, but I guess that makes sense given that it's been around since 1996 and would need a mutation to be contagious among humans.

I found one of the superforecaster®-trainee discussion topics where they reveal some of the secrets to their (super)forecasting(®)-trainee instincts:

I have used "Copilot" LLM AI to point me in the right direction. And to the point of the LLM they have been trained not to give a response about conflict as they say they are trying to permote peace instead of war using the LLM.

Riveting!

Next, let's find out how to give up our individuality and become a certified superforecaster® hive brain.

To minimize the chance that outstanding accuracy resulted from luck rather than skill, we limited eligibility for GJP superforecaster status to those forecasters who participated in at least 50 forecasting questions during a tournament “season.”

Fans of certain shonen anime may recognize this technique as Kodoku -- a deadly poison created by putting a bunch of insects in a jar until only one remains:

100 species of insects were collected; the larger ones were snakes, the smaller ones were lice. Place them inside, let them eat each other, and keep what is left of the last species. If it is a snake, it is a serpent; if it is a louse, it is a louse. Do this and kill a person.


"But what's the catch, Saturn?" I can hear you say. "Surely this is somehow a grift for nerds or a way to fleece money out of governments."

No no no, you've got the completely wrong idea. Good Judgement offers a $100 Superforecasting Fundamentals course out of the goodness of their hearts, I'm sure! I mean, after all, if they spread Superforecasting to the world then their Hari-Seldon-esque hivemind would lose its competitive edge, so they must not be profit motivated.

Anyway if you work for the UK they want to hear from you:

If you are a UK government entity interested in our services, contact us today.

Maybe they have superforecasted the fall of the British Empire.


And to end this, because I can never resist a web design sneer:

Dear programmers: if you apply the CSS word-break: break-all; to the string "Privacy Policy" it may end up rendered as "Pr[newline]ivacy Policy" which unfortunately looks pretty unprofessional :(
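For anyone who hit the same rendering bug: a sketch of the usual fix (selector name is made up for illustration) is to reserve mid-word breaking for strings that would otherwise overflow, instead of allowing a break between any two characters:

```css
/* word-break: break-all permits a line break between ANY two characters,
   which is how "Privacy Policy" ends up as "Pr[newline]ivacy Policy". */
.footer-link {
  /* overflow-wrap only breaks inside a word when the word cannot
     otherwise fit on a line of its own — "Privacy" stays whole. */
  overflow-wrap: break-word; /* instead of: word-break: break-all; */
}
```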

[–] saucerwizard@awful.systems 0 points 8 months ago (1 children)

I grabbed a book on the Fermi paradox from the university library and it turned out to be full of Bostrom and Sandberg x-risk stuff. I can’t even enjoy nerd things anymore.

[–] self@awful.systems 0 points 8 months ago (1 children)

it’s the actual fucking worst when the topics you’re researching get popular in TESCREAL circles, because all of the accessible sources past that point have a chance of being cult nonsense that wastes your time

I’ve been designing some hardware that speaks lambda calculus as a hobby project, and it’s frustrating when a lot of the research I’m reading for this is either thinly-veiled cult shit, a grift for grant dollars, or (most often) both. I’ve had to develop a mental filter to stop wasting my time on nonsensical sources:

  • do they make weird claims about Kolmogorov complexity? if so, they’ve been ingesting Ilya’s nonsense about LLMs being Kolmogorov complexity reducers and they’re trying to use a low Kolmogorov complexity lambda calculus representation to implement their machine god. discard this source.
  • do they cite a bunch of AI researchers, either modern or pre-winter? lambda calculus, lisp, and functional programming in general have a long history of being treated as the magic that’ll enable the machine god by AI researchers, and this is the exact low quality shit research that led to the AI winter in the first place. discard this source.
  • at any point do they casually claim that the Church-Turing thesis has been disproven or that a lambda calculus machine is super-Turing? throw that crank shit in the trash where it belongs.

I think the worst part is having to emphasize that I’m not with these cult assholes when I occasionally talk about my hobby work — I’m not in it to make the revolutionary machine that’ll destroy the Turing orthodoxy or implement anyone’s machine god. what I’m making most likely won’t even be efficient for basic algorithms. the reason why I’m drawn to this work is because it’s fun to implement a machine whose language is a representation of pure math (that can easily be built up into an ML-like assembly language with not much tooling), and I really like how that representation lends itself to an HDL implementation.
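For readers unfamiliar with what "a machine whose language is a representation of pure math" means here, a minimal sketch (not the commenter's actual HDL design — all names below are made up for illustration) of untyped lambda calculus terms with normal-order beta reduction, using de Bruijn indices to avoid variable capture:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:      # de Bruijn index: 0 = innermost enclosing binder
    idx: int

@dataclass(frozen=True)
class Lam:      # abstraction \x. body
    body: object

@dataclass(frozen=True)
class App:      # application (fn arg)
    fn: object
    arg: object

def shift(t, by, cutoff=0):
    """Shift the free variable indices in t up by `by`."""
    if isinstance(t, Var):
        return Var(t.idx + by) if t.idx >= cutoff else t
    if isinstance(t, Lam):
        return Lam(shift(t.body, by, cutoff + 1))
    return App(shift(t.fn, by, cutoff), shift(t.arg, by, cutoff))

def subst(t, val, depth=0):
    """Substitute val for the variable bound at `depth` in t."""
    if isinstance(t, Var):
        if t.idx == depth:
            return shift(val, depth)
        return Var(t.idx - 1) if t.idx > depth else t
    if isinstance(t, Lam):
        return Lam(subst(t.body, val, depth + 1))
    return App(subst(t.fn, val, depth), subst(t.arg, val, depth))

def normalize(t):
    """Normal-order (leftmost-outermost) reduction to beta-normal form."""
    if isinstance(t, App):
        fn = normalize(t.fn)
        if isinstance(fn, Lam):                    # beta-redex: contract it
            return normalize(subst(fn.body, t.arg))
        return App(fn, normalize(t.arg))
    if isinstance(t, Lam):
        return Lam(normalize(t.body))
    return t

# Church boolean TRUE = \x. \y. x selects its first argument:
true = Lam(Lam(Var(1)))
ident = Lam(Var(0))
assert normalize(App(App(true, ident), Lam(Lam(Var(0))))) == ident
```

An evaluator like this maps fairly directly onto hardware: terms become a heap of tagged nodes and reduction becomes graph rewriting, which is presumably the appeal of an HDL implementation.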

[–] V0ldek@awful.systems 3 points 5 months ago

lisp (...) [has] a long history of being treated as the magic that’ll enable the machine god

Nonsense, we all know the robot god will be hacked together in Perl.

[–] self@awful.systems 0 points 8 months ago (3 children)

I learned about this because someone dragged a copy to aella’s birthday orgy and it showed up in one of the photos, but the rationalists have a cards against humanity clone and it looks godawful

[–] jonhendry@awful.systems 1 points 8 months ago

• came in a deck of cards

[–] sc_griffith@awful.systems 0 points 8 months ago (1 children)

imagine someone pulls this out and you have no idea what it is. you're kind of nervous and weirded out by the energy at this orgy but at least this will distract you. you look at your first card and it has a yudkowsky quote on it

[–] self@awful.systems 1 points 8 months ago

one of the data fluffers solemnly logs me as “did not finish” as I flee the orgy

[–] bitofhope@awful.systems 0 points 8 months ago (1 children)

Jesus wept, that one deserves a thread of its own. I can't remember the last time I winced this hard.

[–] self@awful.systems 1 points 8 months ago* (last edited 8 months ago)

dear fuck I found their card database, which doesn’t seem to be linked from their main page (and which managed to crash its tab as soon as I clicked on the link to see all the cards spread out, because lazy loading isn’t real):

e: somehow the cards get less funny the higher the funny rating goes

e2: there’s no punchability rating but it’s desperately needed

[–] dgerard@awful.systems 0 points 8 months ago

There isn't really a suitable awful.systems sub to put it in, but I thought I'd note here that Stonetoss got doxxed thoroughly just to increase the general good cheer and bonhomie