BigMuffin69

joined 10 months ago
[–] BigMuffin69@awful.systems 20 points 5 months ago

If you really wanna just throw some fucking spaghetti at the wall, YOU CAN DO THAT WITHOUT AI.

i have found I get .000000000006% less hallucination rate by throwing alphabet soup at the wall instead of spaghett, my preprint is on arXiv

[–] BigMuffin69@awful.systems 18 points 5 months ago

THIS IS NOT A DRILL. I HAVE EVIDENCE YANN IS ENGAGING IN ACAUSAL TRADE WITH THE ROBO GOD.

[–] BigMuffin69@awful.systems 23 points 5 months ago* (last edited 5 months ago) (3 children)

Found in the wilds^

Giganto brain AI safety 'scientist'

If AIs are conscious right now, we are monsters. Nobody wants to think they're monsters. Ergo: AIs are definitely not conscious.

Internet rando:

If furniture is conscious right now, we are monsters. Nobody wants to think they're monsters. Ergo: Furniture is definitely not conscious.

[–] BigMuffin69@awful.systems 22 points 5 months ago (10 children)

https://xcancel.com/AISafetyMemes/status/1802894899022533034#m

The same pundits have been saying "deep learning is hitting a wall" for a DECADE. Why do they have ANY credibility left? Wrong, wrong, wrong. Year after year after year. Like all professional pundits, they pound their fist on the table and confidently declare AGI IS DEFINITELY FAR OFF and people breathe a sigh of relief.

Because to admit that AGI might be soon is SCARY. Or it should be, because it represents MASSIVE uncertainty. AGI is our final invention. You have to acknowledge the world as we know it will end, for better or worse. Your 20 year plans up in smoke. Learning a language for no reason. Preparing for a career that won't exist. Raising kids who might just... suddenly die. Because we invited aliens with superior technology we couldn't control.

Remember, many hopium addicts are just hoping that we become PETS. They point to Iain Banks' Culture series as a good outcome... where, again, HUMANS ARE PETS. THIS IS THEIR GOOD OUTCOME.

What's funny, too, is that noted skeptics like Gary Marcus still think there's a 35% chance of AGI in the next 12 years - that is still HIGH! (Side note: many skeptics are butthurt they wasted their career on the wrong ML paradigm.)

Nobody wants to stare in the face the fact that 1) the average AI scientist thinks there is a 1 in 6 chance we're all about to die, or that 2) most AGI company insiders now think AGI is 2-5 years away. It is insane that this isn't the only thing on the news right now.

So... we stay in our hopium dens, nitpicking The Latest Thing AI Still Can't Do, missing forests from trees, underreacting to the clear-as-day exponential. Most insiders agree: the alien ships are now visible in the sky, and we don't know if they're going to cure cancer or exterminate us. Be brave. Stare AGI in the face.

This post almost made me crash my self-driving car.

[–] BigMuffin69@awful.systems 15 points 5 months ago* (last edited 5 months ago)

did somebody troll him by saying ‘we will just make the LLM not make paperclips bro?’

rofl, I cannot even begin to fathom all the 2010 era LW posts where peeps were like, "we will just tell the AI to be nice to us uwu" and Yud and his ilk were like "NO DUMMY THAT WOULDNT WORK B.C. X Y Z ." Fast fwd to 2024, the best example we have of an "AI system" turns out to be the blandest, milquetoast yes-man entity due to RLHF (aka, just tell the AI to be nice bruv strat). Worst of all for the rats, no examples of goal seeking behavior or instrumental convergence. It's almost like the future they conceived on their little blogging site shares very little in common with the real world.

If I were Yud, the best way to salvage this massive L would be to say "back in the day, we could not conceive that you could create a chat bot that was good enough to fool people with its output by compressing the entire internet into what is essentially a massive interpolative database, but ultimately, these systems have very little to do with the sort of agentic intelligence that we foresee."

But this fucking paragraph:

(If a googol monkeys are all generating using English letter-triplet probabilities in a Markov chain, their probability of generating Shakespeare is vastly higher but still effectively zero. Remember this Markov Monkey Fallacy anytime somebody talks about how LLMs are being trained on human text and therefore are much more likely to end up with human values; an improbable outcome can be rendered “much more likely” while still being not likely enough.)

ah, the sweet, sweet aroma of absolute copium. Don't believe your eyes and ears people, LLMs have everything to do with AGI and there is a smol bean demon inside the LLMs that is catastrophically misaligned with human values that will soon explode into the super intelligent lizard god the prophets have warned about.
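For the record, the "Markov monkey" setup in the quoted paragraph is easy to check numerically. Here's a toy sketch (corpus, function names, and alphabet size are all my own choices, not anything from the quote) that trains letter-trigram probabilities on a tiny corpus, then compares the log-probability of a target string under the trigram monkey vs the uniform-random monkey:

```python
import math
from collections import Counter, defaultdict

# Tiny corpus standing in for "English letter-triplet probabilities".
corpus = "to be or not to be that is the question " * 50

# Count trigram transitions: P(next char | previous two chars).
trigrams = defaultdict(Counter)
for i in range(len(corpus) - 2):
    trigrams[corpus[i:i + 2]][corpus[i + 2]] += 1

def log10_prob(text, alphabet=27):
    """Log10 probability of emitting `text` (first two chars given),
    under the trigram model and under a uniform 27-key typewriter."""
    markov = uniform = 0.0
    for i in range(2, len(text)):
        counts = trigrams[text[i - 2:i]]
        total = sum(counts.values())
        if counts[text[i]] == 0:
            return float("-inf"), uniform  # model can never produce it
        markov += math.log10(counts[text[i]] / total)
        uniform += math.log10(1 / alphabet)
    return markov, uniform

markov, uniform = log10_prob("to be or not to be")
# markov is vastly larger than uniform, yet for a Shakespeare-length
# target both log-probs head toward minus infinity just as fast.
```

Which is to say: yes, conditioning on letter statistics makes the monkey astronomically "more likely" to hit the target, and no, that by itself tells you nothing about whether it's likely *enough*. The quoted paragraph then leaps from that to LLMs, which is where the copium kicks in.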

[–] BigMuffin69@awful.systems 11 points 5 months ago* (last edited 5 months ago)

my b lads, I corrected it

[–] BigMuffin69@awful.systems 17 points 5 months ago* (last edited 5 months ago) (1 children)

I'm getting a tramp stamp that says "Remember the Markov Monkey Fallacy"

[–] BigMuffin69@awful.systems 11 points 5 months ago* (last edited 5 months ago) (2 children)

And the number of angels that can dance on the head of a pin? 9/11

[–] BigMuffin69@awful.systems 3 points 5 months ago

You know, for a blog that's on its face about computational complexity, you'd think Scott would show a little more skepticism to the tech bro saying "all we need is 14 quintillion x compute to solve the Riemann hypothesis"
