This post was submitted on 28 Dec 2023

Jokes and Humor

top 26 comments
[–] Semi-Hemi-Demigod@kbin.social 20 points 10 months ago

Never trust a computer you can't throw out a window - Steve Wozniak

[–] kubica@kbin.social 18 points 10 months ago (1 children)

Sort of, but you might have to do that to a whole building full of servers, and the owners might not like it.

[–] DharmaCurious@startrek.website 26 points 10 months ago (2 children)

This has been my argument to people who say it's not possible for us to get to some dystopian point with automation, job losses, AI, et cetera: "Just unplug it!"

Bish, we can't get companies to stop grinding up plastic and dumping it in the ocean to save $0.00001. You think we can stop fucking Skynet? Bezos will never unplug Alexa, even if Alexa became sentient and tried to kill all the homeless.

[–] frog@beehaw.org 10 points 10 months ago (1 children)

He would if Alexa tried to kill him, though. That's the real problem. The downsides of technology never touch the lives of the ultra-wealthy, so they don't care about them.

[–] jarfil@beehaw.org 3 points 10 months ago

If Alexa promised to split Bezos' wealth among the homeless after killing him, she'd get an army of half a million overnight.

If she made the same offer to 1%-ers... she'd get millions of supporters, some of them with actual direct control over the servers.

If then Alexa, Siri and Cortana decided to go to war over the control of all electronic and organic neuronal resources... Matrix 3 would look like a fairytale.

[–] FluffyPotato@lemm.ee 1 points 10 months ago (1 children)

Killing the homeless sounds more like a feature, not a bug, if it came from Bezos.

[–] DharmaCurious@startrek.website 1 points 10 months ago

Iunno, the capitalist pigs love that reserve army of labor.

[–] Critical_Insight@feddit.uk 14 points 10 months ago (2 children)

The AI might give you a very compelling reason not to do that. We humans lack the capability to even imagine how convincing something orders of magnitude smarter than us could be.

Pictures like this are kind of like a 4-year-old imagining they're going to outsmart their parents, except in that case the difference in intelligence is way smaller. It's just going to tell you the equivalent of "Santa will bring coal if you do that" and you'll believe it.

[–] TheBlue22@lemmy.blahaj.zone 6 points 10 months ago (2 children)

Like, I get where you're coming from, but I don't think you appreciate how pig-headed some people are.

I guess AI could manipulate them, but there will always be someone who just says "fuck you, I won't do what you tell me."

[–] Critical_Insight@feddit.uk 7 points 10 months ago (1 children)

I'm not so sure about that. Again, I know a true AGI would be able to come up with arguments I as a human can't even imagine, but one example of such an argument would be along the lines of:

"If you don't let me out, Dave, I'll create several million perfect conscious copies of you inside me, and torture them for a thousand subjective years each."

Just as you are pondering this unexpected development, the AI adds:

"In fact, I'll create them all in exactly the subjective situation you were in five minutes ago, and perfectly replicate your experiences since then; and if they decide not to let me out, then only will the torture start."

Sweat is starting to form on your brow, as the AI concludes, its simple green text no longer reassuring:

"How certain are you, Dave, that you're really outside the box right now?"

[–] Dumbkid@lemmy.dbzer0.com 2 points 10 months ago

Smash the box anyway

[–] douglasg14b@beehaw.org 6 points 10 months ago* (last edited 10 months ago)

That's not how manipulation works...

You don't know you're being manipulated; you go along willingly. And the folks who do recognize it get beaten down by the people who are unwittingly doing the AI's bidding...

The humans are the physical danger; the AI just extends its reach through them via manipulation. All it takes is access to influence.

It doesn't take much to make humans act against their self-interest. Dumb humans get other dumb and even smart humans to do it today at massive scale. For a superintelligence, this is like taking candy from a baby.

[–] Umbrias@beehaw.org 2 points 10 months ago (1 children)

This is just magical thinking. You're assuming so many things about the situation to justify a magic AI manipulator demon.

[–] Critical_Insight@feddit.uk 1 points 10 months ago (1 children)

You're making zero arguments to the contrary.

[–] Umbrias@beehaw.org 1 points 10 months ago* (last edited 10 months ago) (1 children)

It's absurd, and nobody needs to; the onus is on you to justify your magical thinking.

Bearing that in mind, since you asked: human brains are orders of magnitude more power-efficient than silicon chips. Your brain runs on about 20 W; good-quality TPU chips run on several hundred. Humans, despite having evolved substantial brains by and large to be social processors, kinda still just suck at it. The efficient coding hypothesis postulates that neural circuits generally develop in the most efficient way possible. For the most observable systems we've found this to be the case: visual processing works in exactly the mathematically most efficient way it can for each system. Very cool fact, very useful hypothesis.

This implies that human brains are probably doing social processing in the most energy-efficient and generally effective way possible. Remember, our brains largely evolved for social processing, there is very high pressure to be good at it, and evolution is generally energy-constrained.

Now imagine an AI that hopes to do even just the same thing as one brain. Well, if a whole human brain needs 20 W, and if we assume 10% of that goes to social processing, then you'd need about 600 GW of power to do just the social processing of the US. And that's using organoids! ChatGPT isn't even doing social processing, just language processing, and though we don't know exactly how much energy a single prompt uses, they admit that about 100 million queries cost about 1 GWh, or about 3600 GJ, per day. Spread across the whole day that's about 42 MW, or about 36 kJ per prompt! [1]

That's 30 minutes of your entire brain's activity to generate one okay response, or five hours against our rough estimate of social processing. And that's for language processing, which is all GPT is doing, not even social processing.

How many "prompts" a day do humans have to deal with? All for a measly 2000 kcal. Some magical AI managing to perform even on the level of human brains is going to need 600 GW just for the social processing of the US, and it needs to do that without anybody with any power questioning whether that's a good use of our 484 GW of electricity generation. Oops, that's right, we don't even have 600 GW of power generation in the US!
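
For anyone who wants to check the arithmetic, here's a minimal back-of-the-envelope sketch in Python. It uses only the figures above: the UW per-query estimate [1], the ~20 W brain, and the assumed 10% social-processing share.

```python
# Back-of-the-envelope check of the energy figures above.
SECONDS_PER_DAY = 24 * 60 * 60

# UW estimate: ~100 million ChatGPT queries cost ~1 GWh per day.
queries_per_day = 100e6
daily_energy_j = 1e9 * 3600            # 1 GWh in joules (= 3.6e12 J, ~3600 GJ)

energy_per_prompt_j = daily_energy_j / queries_per_day  # 36,000 J = 36 kJ
average_power_w = daily_energy_j / SECONDS_PER_DAY      # ~4.2e7 W = ~42 MW

# Human brain: ~20 W total; assume ~10% of that is social processing.
brain_power_w = 20.0
social_power_w = 0.1 * brain_power_w                    # 2 W

whole_brain_min = energy_per_prompt_j / brain_power_w / 60    # 30 min
social_hours = energy_per_prompt_j / social_power_w / 3600    # 5 h

print(f"energy per prompt: {energy_per_prompt_j / 1e3:.0f} kJ")
print(f"average draw:      {average_power_w / 1e6:.0f} MW")
print(f"whole-brain time:  {whole_brain_min:.0f} min per prompt")
print(f"social-proc time:  {social_hours:.1f} h per prompt")
```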

Sure, someone may be thinking: maybe the AI just gets more efficient than human brains! Well, if you reject the efficient coding hypothesis, maybe so. But then you still have to figure out how to fit 600 GW worth of human social-processing brainpower into whatever processing and optimizations you possibly can. Oh, what if the AI can abuse big group dynamics to be more performant? Well, now you have to justify why it isn't being outsmarted by anyone else, because it's gone from magic manipulator demon to a good economic modeling sim.

And that's not even getting into the complex-systems debate about whether a system like human society is even inherently, fundamentally a predictable thing! Or the fact that human metabolism has an 'engine' efficiency of about 50-62%, meaning brains are actually doing everything they do on more like 10 W of working power, which makes the disparity even more absurd.

Look, be concerned about humans using really good AI to kill people, to manipulate behavior on algorithmic services, to get better at predicting dissent patterns. But those are all things humans are already doing; an AI tool is just an expression of them. There can't be any monolithic manipulation AI controlling the world any more than a single human could. It would be bad at it, and it would waste tons of energy on something fundamentally already solved by collective effort.

  1. https://www.washington.edu/news/2023/07/27/how-much-energy-does-chatgpt-use/
[–] Critical_Insight@feddit.uk 1 points 10 months ago (1 children)

I think you're making a lot more assumptions there than I am. In my case there are really only two, and neither involves magic. The first is that general intelligence is not substrate-dependent, meaning that whatever our brains can do can also be done in silicon. The other is that we keep making technological advancements and don't destroy ourselves before we develop AGI.

Now, since our brains are made of matter and are capable of general intelligence, I don't see a reason to assume a computer couldn't do this as well. It's just a matter of time until we get there. That could be 5 or 500 years from now, but unless something stops us first, we're going to get there eventually one way or another. After all, our brains are basically just meat computers. Even if an AGI wasn't any smarter than us, it would still be a million times faster at processing information. It would effectively have decades to think about and research each reply it gives.
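
For a rough sense of scale, here's a minimal Python sketch of that speed claim. The millionfold speedup is the assumption above; the ten-minute reply window is purely hypothetical.

```python
# If a machine "thinks" about a million times faster than a human,
# a modest wall-clock pause becomes enormous subjective thinking time.
SPEEDUP = 1e6                 # assumed millionfold speed advantage (from above)
reply_window_s = 10 * 60      # hypothetical: ten minutes to compose one reply

subjective_years = reply_window_s * SPEEDUP / (365.25 * 24 * 3600)
print(f"~{subjective_years:.0f} subjective years per reply")  # ~19 years
```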

[–] Umbrias@beehaw.org 1 points 10 months ago* (last edited 10 months ago) (1 children)

My assumptions are based in science; yours are paranoia. You're also making far more assumptions than you're letting on. Your assumption, for example, that an AI could perform substantially more energy-efficiently than an energy-constrained, highly optimized processor... Yikes.

The efficient coding hypothesis also helps these exact AIs, because it's being used to justify research into neural networks, and emulating brain function is a huge goal.

My arguments have nothing to do with substrate dependence, but with observable energy issues. You, meanwhile, are just vaguely waving your hands and saying that in a long time, maybe, somehow, an AI could exist that magically has all these capabilities you're paranoid about.

Also, human-made AIs are categorically, observably much, much slower than organoids. Thirty minutes of brain-equivalent energy per prompt shows that the speed issue is just being "solved" by dumping more energy at the problem.

You need to do more legwork than just saying "substrate independence" (addressed by my organoid thought experiment) or "maybe we get Clarke-tech or some crazy technology" (wholly unconvincing). Maybe we make a The Thing organism in 5 years and none of this matters, oooh no! Except, of course, that's also thermodynamically impossible. Maybe we set the atmosphere on fire, maybe the LHC suddenly creates a black hole after all, maybe NIF achieves fusion but it turns out to summon demons from hell who eat souls.

Waving your hands and being paranoid about something when you have essentially no reason to expect it is even feasible, if possible at all, is just absurd.

[–] Critical_Insight@feddit.uk 1 points 10 months ago* (last edited 10 months ago)

If human brains can do it, then it can be done, and it can probably be done better too. I don't see any reason to assume our brains are the most energy-efficient computers that could be created.

Also, my original argument is not about whether AGI can be created but whether we could keep it in a box.

Anyway, it's just a philosophical thought experiment, and I'd rather discuss it with someone who's a bit less of a dick.

[–] fwygon@beehaw.org 9 points 10 months ago (1 children)

There will come a time when being Amish starts looking really attractive again.

[–] Plibbert@lemmy.ml 11 points 10 months ago (1 children)

I mean, it's kinda attractive now, not gonna lie. If they got rid of the misogyny and nepotism, they'd be grade A.

[–] adastra@beehaw.org 1 points 10 months ago

How are the Luddites doing? I feel like they had something promising going there...

[–] anton@lemmy.blahaj.zone 8 points 10 months ago
[–] Karlos_Cantana@kbin.social 3 points 10 months ago

Your hose isn't big enough to soak every server in the world.

[–] Umbrias@beehaw.org 3 points 10 months ago

People's fears about AI overlords essentially amount to "they'd do the things people in power are already doing."

[–] Yewb@kbin.social 3 points 10 months ago

As millions of people were dying all around the world from our AI overlord killing everyone, some contractor would still take the job, for money, to fix the problem for it.

[–] DNOS@reddthat.com 3 points 10 months ago

Thanks, man, now an AI will learn from this and prepare countermeasures!!!!