this post was submitted on 06 Dec 2023
253 points (95.7% liked)
Technology
ChatGPT was very far from the first publicly available generative AI. It didn't even do images at first.
Also, there are already plenty of YouTube channels that show you how to make all sorts of extremely dangerous explosives.
But the concern isn't which generative AI came first - their "idea" was that AIs of all types, including generalised ones, should just be released as-is, with no further safeguards.
That overlooks the fact that OpenAI doesn't only develop text-generation AIs. Generalised AI can do horrifying things even through accidental misconfiguration (see the paperclip-maximiser example).
But even a text-focused LLM like ChatGPT can be coerced into generating non-text data with the right prompting.
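To make that concrete, here's a minimal sketch of what that kind of prompting can look like (using the OpenAI Python client; the model name and prompt are purely illustrative):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A text-only chat model is asked to emit non-text content encoded as
# text - in this case SVG markup, which any browser renders as an image.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{
        "role": "user",
        "content": "Reply with only a complete SVG document that draws a red circle.",
    }],
)

with open("circle.svg", "w") as f:
    f.write(response.choices[0].message.content)
```

The point isn't the circle; it's that the output channel is just text, so "text-only" isn't much of a boundary.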
Even in that example, one can't just dig up those sorts of videos without, at minimum, leaving a trail. But an unrestricted pretrained model can be distributed, run locally, and used without a trace to generate any content it's capable of generating.
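As a rough sketch of how low that bar is (assuming the Hugging Face transformers library; the checkpoint name is just a placeholder for any openly distributed model):

```python
from transformers import pipeline

# The checkpoint is downloaded once to local disk; after that, generation
# runs entirely offline and produces no server-side record at all.
generator = pipeline("text-generation", model="gpt2")  # placeholder checkpoint

print(generator("Any prompt at all,", max_new_tokens=50)[0]["generated_text"])
```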
And with a generalised AI, the only constraint on the prompt "kill everybody except me" becomes available compute.