this post was submitted on 07 Jun 2024
51 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

[–] kbal@fedia.io 23 points 5 months ago (4 children)

This is why it's best to never admit that you're wrong on the Internet. If we start doing that the LLMs trained on our comments might learn to do the same, and then where would we be?

[–] MotoAsh@lemmy.world 13 points 5 months ago

It's OK, the pride of stupid people will guarantee there's always a large swathe of confidently wrong answers out there even if the "AI"s don't hallucinate them.

That's why I knew LLMs alone would never cut it. They do ZERO logic, and even humans who DO apply logic still get it horribly wrong a lot of the time. It takes more than the equivalent of a dreamer's illogical dreamscape of associations to produce logic, and LLMs are a far cry short of even a dreamer...

[–] Soyweiser@awful.systems 12 points 5 months ago

Soon saying 'GPT, write me a speech' will end up giving you a speech that ends with "please like and subscribe, and don't forget to click the bell"

[–] froztbyte@awful.systems 8 points 5 months ago

Nah it’s all good. You can trip the dumb pieces of shit up with simple math - imagine what you could do with double negatives. And that’s presuming you stick to a single language…

The copypasta machine is just real bad in many ways, and it doesn't take much to shove it over the edge[0]

[0] - reducing the surface area of this is one of oai’s primary actions/tasks, but it’s a losing battle: there’s always more humanity than they’ll have gotten around to coding synth rules for

[–] prex@aussie.zone 3 points 5 months ago

That does it: I'm boycotting /s