[–] BertramDitore@lemmy.world 0 points 10 months ago

Disinformation comes from self-serving, agenda-driven swaths of the world's population (meaning people, not AI), and AI-powered tools will amplify it. The tools themselves are not necessarily the problem (though of course they sometimes are), but if the datasets they steal (sorry, use) to train their models are filled with dis- and misinformation, then obviously their outputs will be filled with the same. We should tackle the inputs first; then the outputs will be less likely to misinform.
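
To make "tackle the inputs first" concrete, here's a minimal sketch of what filtering a training corpus by source credibility could look like. The credibility table, threshold, and field names are all invented for illustration; this isn't any real pipeline:

```python
# Hypothetical sketch: drop low-credibility documents from a training
# corpus before they ever reach a model. Scores and cutoff are made up.

CREDIBILITY = {                 # assumed scores, 0.0 (junk) to 1.0 (trusted)
    "apnews.com": 0.9,
    "examplefakenews.net": 0.1,
}
MIN_SCORE = 0.5                 # arbitrary cutoff for this sketch

def keep(doc: dict) -> bool:
    """Keep a document only if its source clears the credibility bar."""
    return CREDIBILITY.get(doc["source"], 0.0) >= MIN_SCORE

corpus = [
    {"source": "apnews.com", "text": "..."},
    {"source": "examplefakenews.net", "text": "..."},
]
clean = [d for d in corpus if keep(d)]
print(len(clean))  # -> 1: the low-credibility document never gets trained on
```

The hard part, of course, is the credibility table itself, which is exactly why the quality of the press and institutions matters.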

For the inputs to be better, we need a quality free press and faith in our public institutions, so most of the world is not in great shape on that front…

We also need to be able to easily see inside the workings of AI models so we can pinpoint exactly how the misinformation is being generated and take steps to fix it. I understand this is currently a pretty challenging technical problem, but frankly I don't think AI tools should ever be made public until they are fully transparent about their sourcing.
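
One way "transparent about their sourcing" could look in practice is carrying provenance metadata alongside every training example, so an auditor can trace what fed the model. This is purely a sketch; the class, function, and field names are invented:

```python
# Hypothetical sketch: attach provenance to training examples so their
# sources can be audited later. All names are invented for illustration.

from dataclasses import dataclass

@dataclass
class Example:
    text: str
    source_url: str    # where the text came from
    retrieved_at: str  # when it was collected

def provenance_report(examples: list[Example]) -> dict[str, int]:
    """Count how many examples each source contributed."""
    counts: dict[str, int] = {}
    for ex in examples:
        counts[ex.source_url] = counts.get(ex.source_url, 0) + 1
    return counts

data = [
    Example("Study finds Y...", "apnews.com", "2024-01-02"),
    Example("Vaccines cause X...", "examplefakenews.net", "2024-01-01"),
]
print(provenance_report(data))
# -> {'apnews.com': 1, 'examplefakenews.net': 1}: an auditor can see
#    exactly which sources fed the model and weed out the bad ones.
```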