this post was submitted on 19 Dec 2024
38 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 1 year ago
top 26 comments
[–] Architeuthis@awful.systems 12 points 8 hours ago* (last edited 8 hours ago) (2 children)

Slate Scott just wrote about a billion words of extra rigorous prompt-anthropomorphizing fanfiction on the subject of the paper, he called the article When Claude Fights Back.

Can't help but wonder if he's just a critihype-enabling useful idiot who refuses to know better, or if he's being purposefully dishonest to proselytize people into his brand of AI doomerism and EA, or if the difference is even meaningful.

edit: The claude syllogistic scratchpad also makes an appearance, it's that thing where we pretend that they have a module that gives you access to the LLM's inner monologue complete with privacy settings, instead of just recording the result of someone prompting a variation of "So what were you thinking when you wrote so and so, remember no one can read what you reply here". Cue a bunch of people in the comments moving straight into wondering if Claude has qualia.

[–] istewart@awful.systems 3 points 23 minutes ago

I feel like "qualia" is both an interesting concept, and a buzzword that has rapidly grown to indicate people who need to be aggressively ignored.

[–] o7___o7@awful.systems 10 points 7 hours ago* (last edited 7 hours ago) (2 children)

I used to think that comparing LLMs to people was dumb, because LLMs are just feed-forward networks--basically seven bipartite graphs in a trench coat--that are incapable of introspection.

However, I'm coming around to the notion that some of our drive-by visitors have a brain that's seven cells deep.

[–] prex@aussie.zone 1 points 1 hour ago* (last edited 1 hour ago)

I feel attacked.
Seriously, I hate the idea that my comments are replies to engagement bots. I'm sure some are, but my seven cells are too busy to work out which ones.

edit: cell

[–] leftzero@lemmynsfw.com 6 points 6 hours ago

Yeah, general artificial intelligence LLMs are definitely not. Human level intelligence, though... yeah, that depends on what particular human you're talking about.

(Though, to be fair, this isn't limited to LLMs... it also applies to Eliza, for instance, or your average lump of granite.)

[–] PhilipTheBucket@ponder.cat -5 points 8 hours ago (4 children)

AI developers need to generate criti-hype — “criticism” that says the AI is way too cool and powerful and will take over the world, so you should give them more funding to control it.

This isn’t quite accurate. The criticism is that if new AI abilities run ahead of the ability to make the AI behave sensibly, we will reach an inflection point where the AI will be in charge of the humans, not vice versa, before we make sure that it won’t do horrifying things.

AI chat bots that do bizarre and pointless things, but are clearly capable of some kind of sophistication, are exactly the warning sign that as it gains new capabilities this is a danger we need to be aware of. Of course, that’s a separate question from the question of whether funding any particular organization will lead to any increase in safety, or whether asking a chatbot about some imaginary scenario has anything to do with any of this.

[–] Amoeba_Girl@awful.systems 8 points 4 hours ago (1 children)

what if the AI sprouts wings and flies into the sky where we can't reach it?

[–] froztbyte@awful.systems 2 points 3 hours ago

maybe that’s how the moon got mad - annoying goddamn chatbots flying in its view the whole time

[–] nightsky@awful.systems 13 points 7 hours ago (1 children)

With your choice of words you are anthropomorphizing LLMs. No valid reasoning can occur when starting from a false point of origin.

Or to put it differently: to me this is similarly ridiculous as if you were arguing that bubble sort may somehow "gain new abilities" and do "horrifying things".

[–] self@awful.systems 9 points 6 hours ago (1 children)

I had assumed the golden age of people coming here to critihype LLMs was over because most people outside of Silicon Valley (including a lot of nontechnical people) have realized the technology’s garbage but nope! we’ve got a rush of posters trying the same shit that didn’t work a year ago, as if we’ve never seen critihype before. maybe bitcoin hitting $100,000 makes them think their new grift is gonna make it? maybe their favorite fuckheads entering office is making all their e/acc dreams come true? who can say.

[–] dgerard@awful.systems 11 points 6 hours ago* (last edited 6 hours ago) (2 children)

in crypto, these guys run on a six to eighteen month cycle - they get in, evangelise, get rekt, and disappear in embarrassment. What this means is that the only people who actually remember the history of crypto are the critics.

i once had a coiner demand in outrage that i prooove my claim that bitcoin was started by libertarians.

anyway. dunno if the same will hold in AI grift, but yeah recycling refuted claims as if nothing happened is standard in other areas of pseudoscience.

[–] shnizmuffin@lemmy.inbutts.lol 4 points 2 hours ago

i once had a coiner demand in outrage that i prooove my claim that bitcoin was started by libertarians.

... did they know what a libertarian is?

[–] khalid_salad@awful.systems 8 points 5 hours ago (1 children)

i once had a coiner demand in outrage that i prooove my claim that bitcoin was started by libertarians.

who the fuck else would start it?

[–] sc_griffith@awful.systems 3 points 1 hour ago (1 children)

a libertarian, a pedophile and an early crypto enthusiast walk into a bar

[–] blakestacey@awful.systems 4 points 1 hour ago

"Drinking alone tonight?" the bartender asks.

[–] Architeuthis@awful.systems 14 points 8 hours ago (1 children)

What new AI abilities? LLMs aren't pokemon.

[–] self@awful.systems 12 points 8 hours ago (1 children)

AI chat bots that do bizarre and pointless things, but are clearly capable of some kind of sophistication, are exactly the warning sign that as it gains new capabilities this is a danger we need to be aware of.

hahahaha nope

[–] PhilipTheBucket@ponder.cat -5 points 8 hours ago (3 children)

Here’s a video of an expert in the field saying it more coherently and at more length than I did:

https://youtu.be/zkbPdEHEyEI

You’re free to decide that you are right and we are wrong, but I feel like that’s more likely to be from the Dunning-Kruger effect than from your having achieved a deeper understanding of the issues than he has.

[–] Amoeba_Girl@awful.systems 9 points 4 hours ago* (last edited 4 hours ago)

For anyone who rightfully can't be arsed to click that link, the expert is "Robert Miles AI Safety", who I assume is an expert (a youtuber) in the made-up field of "AI safety".

Not to be confused with the late and great dream trance producer Robert Miles whom we all love dearly.

[–] self@awful.systems 12 points 7 hours ago

who the fuck is “we”? you’re some asshole who bought the critihype so hard you think that when the chatbot does dumb computer shit that only proves it’s more human and more dangerous. you’re not in on this grift, you’re a mark.

[–] o7___o7@awful.systems 12 points 7 hours ago* (last edited 7 hours ago) (1 children)
[–] dgerard@awful.systems 9 points 6 hours ago