this post was submitted on 05 Jun 2023
60 points (100.0% liked)

And as always, the culprit is ChatGPT. Stack Overflow Inc. won't let their mods take down AI-generated content.

you are viewing a single comment's thread
[–] Hyperz@beehaw.org 6 points 1 year ago (2 children)

Yeah, that's a good point. I have no idea how you'd go about solving that problem. Right now you can still sort of tell sometimes when something was AI-generated. But if we extrapolate the past few years of advances in LLMs, say, 10 years into the future... there will be no telling what's AI and what's not. Where does that leave sites like StackOverflow, or indeed many other types of sites?

This then also makes me wonder how these models are going to be trained in the future. What happens when for example half of the training data is the output from previous models? How do you possibly steer/align future models and prevent compounding errors and bias? Strange times ahead.

[–] 14specks@lemmy.ml 9 points 1 year ago (1 children)

> This then also makes me wonder how these models are going to be trained in the future. What happens when for example half of the training data is the output from previous models? How do you possibly steer/align future models and prevent compounding errors and bias? Strange times ahead.

Between this and the "deep fake" tech I'm kinda hoping for a light Butlerian jihad that gets everyone to log tf off and exist in the real world, but that's kind of a hot take

[–] Hyperz@beehaw.org 8 points 1 year ago

But then they'd have to break up with their AI girlfriends/boyfriends 🤔.

I wish I was joking.

[–] cavemeat@beehaw.org 8 points 1 year ago (1 children)

My guess is the internet is gonna go through a trial by fire regarding AI: some stuff is gonna be obscenely incorrect, or difficult to detect, before it all straightens out.

[–] DM_Gold@beehaw.org 5 points 1 year ago (1 children)

At the end of the day, AI should be classified as what it is: a tool. We can embrace this tool and use it to our advantage, or we can fight it all the way... even as more folks start to use it.

[–] Lowbird@beehaw.org 8 points 1 year ago* (last edited 1 year ago) (1 children)

Its threat to jobs wouldn't be anywhere near as much of an issue if people just... had medical care and food and housing regardless of employment status.

As is, it's primarily a tool for the ultra-wealthy to boost productivity while cutting costs, a.k.a. humans. All of the resulting profit and power will just further line the pockets of the 1%.

I'd have no issue with AI... if and only if we fixed the deeper societal problems first. As is, it's salt in the wounds and can't just be ignored.

[–] sazey@kbin.social 2 points 1 year ago

Almost any innovation in human history has been used by the elite to advance themselves first. That just happens to be the nature of power and wealth: it affords you opportunities that wouldn't be available to plebs.

We would still be sitting around waiting for the wheel to become commonplace if the criterion for adoption were that all societal problems be fixed before it spread through society.