this post was submitted on 03 Jul 2023
105 points (100.0% liked)

Technology


An update to Google's privacy policy suggests that the entire public internet is fair game for its AI projects. If Google can read your words, assume they now belong to the company, and expect that they're nesting somewhere in the bowels of a chatbot.

[–] peter@feddit.uk 3 points 1 year ago (1 children)

I don't think they belong to the company any more than the words you read belong to you

[–] FaceDeer@kbin.social 6 points 1 year ago (1 children)

Exactly, this reads like hysteria. If you've put your words on a public website, it's hardly a shocked-Pikachu moment when someone (or, in the case of an AI in training, something) reads them. It's basic fair use.

If someone put up a billboard with some text on it and then got angry whenever someone else read it I would question their sanity. Even if that "someone" was the Google street view car.

[–] LostXOR@kbin.social 5 points 1 year ago (1 children)

Yeah, I don't really see the fuss about people's content being used to train AIs. It's not really any different from a human reading your content and using their brain to make something similar.

[–] clb92@kbin.social 1 points 1 year ago* (last edited 1 year ago)

There's a surprising number of people who seem to think LLMs contain a database of everything they were trained on, and that they just spit out snippets from it. There are also lots of very vocal artists opposed to image generation models who claim that these 5-10 GB models contain all of their copyrighted art, as if the models were just creating collages from existing images.

People simply don't understand how these things work.
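The model-size point can be checked with rough arithmetic. Using approximate public figures (Stable Diffusion's weights are on the order of 4 GB, and its LAION training subset is on the order of 2 billion images; both numbers are assumptions rounded for illustration), each training image could account for only a couple of bytes of model capacity:

```python
# Back-of-envelope: could a ~4 GB image model "contain" its training set?
# Assumed, approximate figures for illustration only:
model_bytes = 4e9        # ~4 GB of model weights
training_images = 2e9    # ~2 billion training images

bytes_per_image = model_bytes / training_images
print(f"{bytes_per_image:.1f} bytes of model capacity per training image")
```

Even a heavily compressed thumbnail needs thousands of bytes, so at roughly two bytes per image the weights cannot be a lossless archive of the training data, whatever one thinks about memorization of individual heavily duplicated images.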