this post was submitted on 04 Jun 2024
28 points (100.0% liked)

SneerClub


Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]


this time in open letter format! that'll sure do it!

there are "risks", which they are definite about - the risks are not hypothetical, the risks are real! it's totes even had some acknowledgement in other places! totes real defs for sure this time guize

[–] Soyweiser@awful.systems 16 points 5 months ago* (last edited 5 months ago) (11 children)

Just once I would like to see an explanation from the AI doomers of how AGI can be an existential risk, considering the limited capacities of Turing-style machines and P != NP (assuming it holds; if it doesn't, the limited-capacities thing falls apart, but then we don't need AGI for stuff to go to shit, as P = NP would probably break a lot of encryption methods). AGI cannot by definition surpass the limits of Turing machines via any of the proposed hypercomputational methods, as then Turing machines would themselves be hyperturing and the whole classification structure comes crashing down.

I'm not a smart computer scientist myself (though I did learn some of the theory, as evidenced above), but I'm constantly amazed at how our hyperhyped tech scene seems not to know that our computing paradigm has fundamental limits. Everything touched by Musk has this problem in the extreme: capacity problems in Starlink, Shannon-theoretically impossible compression demands for Neuralink, everything related to his Tesla/AI autonomous driving/robots thing. (To make this even more of an anti-Musk rant: he also claimed AI would solve chess. Solving chess is a computational problem (it has been done for a 7x7 board, iirc) that just costs a lot of computation time, more than we have. If AI were to solve chess, it would be sidestepping that time, making it a super-Turing thing, which would make Turing machines super-Turing, which is theoretically impossible and would have massive implications for all of computer science. And I can't believe that of all the theoretical hypercomputing methods, we are going with the oracle method (the machine just conjures up the right answer, no idea how), the one I have always mocked personally.) Sorry, rant over.
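
As a back-of-envelope illustration of why "just compute it" doesn't work: a naive game-tree search grows as branching_factor^depth. A minimal Python sketch, using the usual rough figures (~35 legal moves per chess position, ~80-ply games; both are assumptions for illustration, not exact values):

```python
# Back-of-envelope: why brute-forcing chess is hopeless.
# Both figures are rough, commonly cited estimates, not exact values.
BRANCHING_FACTOR = 35   # ~legal moves per chess position
GAME_LENGTH_PLIES = 80  # ~half-moves in a typical game

game_tree_size = BRANCHING_FACTOR ** GAME_LENGTH_PLIES
print(f"naive game tree: ~10^{len(str(game_tree_size)) - 1} positions")

# Even at a fantastical 10^18 positions/second, you'd need vastly more
# seconds than the ~4e17 the universe has existed.
print(f"at 10^18 pos/s: ~{game_tree_size / 1e18:.1e} seconds")
```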

Anyway, these people are not engineers or computer scientists, they are bad science fiction writers. Sorry for the slightly unrelated rant; it had been stuck like a splinter in my mind for a while now, and I guess typing it out and 'telling it to earth' like this makes me feel less ranty about it.

E: of course the fundamental limits apply to both sides of the argument, so both the 'AGI will kill the world' shit and the 'AGI will bring us to a posthuman utopia of a googol humans in post-scarcity' shit seem unlikely. Unprecedented benefits? No. (Also, I'm ignoring physical limits here, a secondary problem that would severely limit the singularity even if P = NP.)

E2: looks at title of OP's post, looks at my post. Shit, the loons ARE at it again.

[–] o7___o7@awful.systems 10 points 5 months ago* (last edited 5 months ago) (5 children)

and P != NP (assuming it holds; if it doesn't, the limited-capacities thing falls apart, but then we don't need AGI for stuff to go to shit, as P = NP would probably break a lot of encryption methods)

Building a sci-fi apocalypse cult around LLMs seems like a missed opportunity when there are much more interesting computer science toys lying around. Like you pointed out, there's the remote possibility that P = NP, which is also largely unexplored in fiction. There's a fun little low-budget movie called Travelling Salesman about this exact scenario, where several scientists are locked in a room deciding what to do with their discovery while the government tries to squeeze them for it. Very 12 Angry Men.

My fav example of the micro-genre is The Laundry Files book series by Charles Stross (who visits these parts!). In the first book, The Atrocity Archives, it turns out that any mathematical proof that P = NP is a closely guarded state secret; so much so that the British government has an entire MoD agency dedicated to rounding up and permanently employing people who discover The Truth. This is because drawing a graph that summons horrors from beyond space-time (brain-eating parasites, hungry ghosts, Cthulhu, a competent Tory politician, etc.) is an NP-complete problem. You really don't want an efficient algorithm for solving 3SAT to show up on reddit.
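
For anyone who hasn't met 3SAT: it's the problem of deciding whether a boolean formula made of 3-literal clauses has a satisfying assignment. A minimal brute-force Python sketch (the formula is a made-up toy, not anything from the book) showing the 2^n blowup an efficient algorithm would sidestep:

```python
from itertools import product

# Made-up toy 3SAT instance: positive int i = variable i, negative = NOT i.
clauses = [(1, 2, -3), (-1, 3, 4), (2, -4, 3), (-2, -3, 4)]
n_vars = 4

def satisfies(assignment, clauses):
    """A clause holds if any of its literals is true under the assignment."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# Brute force: try all 2^n assignments -- fine for n=4, hopeless for n=10000.
for bits in product([False, True], repeat=n_vars):
    assignment = {i + 1: b for i, b in enumerate(bits)}
    if satisfies(assignment, clauses):
        print("satisfiable:", assignment)
        break
else:
    print("unsatisfiable")
```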

I mean, you could also use it to steal bitcoin and make robots, but pfft.

I'm not doing the series justice. I love how Bob, Mo, Mhari, and co grow and change, and their character arcs really hit home for me, as someone who more-or-less grew up alongside the series, not to mention the spot-on social commentary.

[–] BigMuffin69@awful.systems 7 points 5 months ago (1 children)

AH THE TSP MOVIE IS SO FUN :)

btw, as a shill for big MIP (mixed-integer programming), I am compelled to share this site, which has solutions for real-world TSPs!

https://www.math.uwaterloo.ca/tsp/world/
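
If you want to play along at home, here's a minimal nearest-neighbour TSP heuristic in Python; the coordinates are made-up placeholders, not data from the Waterloo site, and exact solvers like the Concorde code behind that site are a whole other beast:

```python
import math

# Made-up city coordinates, purely for illustration.
cities = [(0, 0), (3, 1), (1, 4), (5, 2), (2, 2)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nearest_neighbour_tour(cities):
    """Greedy heuristic: always hop to the closest unvisited city.
    Fast, but can be far from the optimal tour."""
    unvisited = set(range(1, len(cities)))
    tour = [0]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: dist(cities[last], cities[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

tour = nearest_neighbour_tour(cities)
length = sum(dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
             for i in range(len(tour)))
print("tour:", tour, "length:", round(length, 2))
```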

[–] o7___o7@awful.systems 4 points 5 months ago (1 children)

Rad as heck!

btw, (sorry if this is prying!) considering your line of work, is all of this acausal robot god stuff especially weird and off-putting for you? Do your coworkers seem to be resistant to it?

[–] BigMuffin69@awful.systems 6 points 5 months ago* (last edited 5 months ago)

Not prying! Thankful to say, none of my coworkers have ever brought up ye olde basilisk; the closest anyone has gotten is jokes about the LLMs taking over, but never too seriously.

No, I don't find the acausal robot god stuff too weird, because we already had Pascal's wager. But holy shit, people actually full-throatedly believing it to the point that they're having panic attacks, wtf. Like:

  1. Full human-body simulation -> my brother-in-law is a computational chemist, and they spend huge amounts of compute modeling simple few-atom systems. To build a complete human simulation, you'd be computing every force interaction for roughly 10^28 atoms; this is ludicrous (see the back-of-envelope sketch below).

  2. The chucklefucks who are proposing this suggest that once the robot god can sim you (which, again, doubt), it's going to be able to use that simulation of you to model your decisions and optimize against you.

So we have an optimization problem like:

min_{x ∈ X, y} f(x, y)
s.t. y ∈ argmin_{y' ∈ Y} g(x, y')

where x is the decision variable and f(x, y) the objective function 🐍 is trying to minimize, and y and g(x, y) are the decision and objective of me, the simulated human, who has their own goals (don't get turned into paperclips).

This is a bilevel optimization problem, and it's very, very nasty to solve. Even in the nicest case possible, where somehow f and g are convex functions and X, Y are convex sets (which is an insane ask, considering y and g entail a complete human sim), this problem is provably NP-hard.
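
To make the nested structure concrete, here's a minimal Python sketch of a tiny bilevel problem over made-up discrete decision sets, solved by brute-force enumeration (real bilevel solvers are vastly more involved):

```python
# Toy bilevel problem over small discrete sets (made up for illustration).
# The leader (🐍) picks x to minimize f(x, y); the follower (the sim)
# responds with the y that minimizes its own objective g(x, y).
X = range(-5, 6)
Y = range(-5, 6)

def f(x, y):  # leader's objective (arbitrary toy function)
    return (x - 2) ** 2 + x * y

def g(x, y):  # follower's objective (arbitrary toy function)
    return (y - x) ** 2 + y

def follower_best_response(x):
    # Inner problem: the follower optimizes *given* the leader's choice.
    return min(Y, key=lambda y: g(x, y))

# Outer problem: the leader must anticipate the follower's response.
# Note the nesting -- every candidate x requires solving an inner argmin.
best_x = min(X, key=lambda x: f(x, follower_best_response(x)))
best_y = follower_best_response(best_x)
print(f"leader picks x={best_x}, follower responds y={best_y}")
```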

Basically, to build the acausal god, first you need a computer larger than the known universe, and even this probably isn't sufficient.
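
A back-of-envelope sketch of that claim in Python; every figure below is a loose assumption for illustration, not serious physics:

```python
# Rough cost of one simulated second of an all-atom human simulation.
# All figures are loose assumptions for illustration.
ATOMS = 1e28            # ~atoms in a human body
FLOPS_PER_PAIR = 10     # cost of one pairwise force evaluation
TIMESTEP = 1e-15        # ~femtosecond steps, as in molecular dynamics
SIM_SECONDS = 1.0       # simulate one second of subjective time

pairs = ATOMS * (ATOMS - 1) / 2      # naive all-pairs force interactions
steps = SIM_SECONDS / TIMESTEP
total_flops = pairs * FLOPS_PER_PAIR * steps
print(f"~{total_flops:.1e} FLOPs for one simulated second")

# For scale: a top supercomputer manages ~1e18 FLOPs per second, and the
# universe is only ~4e17 seconds old.
print(f"~{total_flops / 1e18:.1e} supercomputer-seconds")
```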

Weird note: while I was in academia, I actually did do some work on training an ANN to model the constraint that y is a minimizer of a follower problem, using the ANN as a proxy for g(x, ·) and then encoding a representation of the trained network into a single-level optimization problem... we got some nice results for some special low-dimensional problems where we had lots of data 🦍 🦍 🦍 🦍 🦍
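
Very roughly, the flavour of that trick in a hypothetical Python sketch (my loose reconstruction of the description above, not the actual method): sample the follower's responses, fit a small network as a proxy for the best response, then optimize the leader's objective through the proxy. Here I just grid-search through the proxy; an actual single-level reformulation would encode the trained network into the optimization problem itself:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Made-up follower problem: pretend we can only observe its optimum by
# solving the inner problem; here it is secretly y*(x) = 0.5x - 1.
def solve_follower(x):
    return 0.5 * x - 1.0

# 1) Sample follower responses to build training data.
xs = np.linspace(-5, 5, 200).reshape(-1, 1)
ys = np.array([solve_follower(x[0]) for x in xs])

# 2) Fit a small ANN as a proxy for the follower's best response.
proxy = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                     random_state=0).fit(xs, ys)

# 3) Optimize the leader's toy objective f(x, y) with y replaced by the
#    proxy's prediction (grid search for simplicity).
def f(x, y):
    return (x - 2.0) ** 2 + x * y

grid = np.linspace(-5, 5, 1001).reshape(-1, 1)
preds = proxy.predict(grid)
best = int(np.argmin([f(x[0], y) for x, y in zip(grid, preds)]))
print(f"leader x ~ {grid[best][0]:.2f}, proxy y ~ {preds[best]:.2f}")
```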
