this post was submitted on 17 Jun 2024
135 points (100.0% liked)

all 31 comments
Gamers_Mate@fedia.io 27 points 4 months ago

I hope this causes a class action against these companies.

PeterBronez@hachyderm.io 7 points 4 months ago

@along_the_road

“These were mostly family photos uploaded to personal and parenting blogs […] as well as stills from YouTube videos”

So… people posted photos of their kids on public websites, Common Crawl scraped them, LAION-5B cleaned it up for training, and now there are models. This doesn’t seem evil to me… the digital commons working as intended.

If anyone is surprised, the fault lies with the UX around “private URL” sharing, not with devs using Common Crawl.

#commoncrawl #AI #laiondatabase

wagoner@infosec.pub 8 points 4 months ago

Doesn't “digital commons” mean common ownership? The family photos on a personal blog are inherently owned by that photographer and surely not commonly owned. I see this as problematic.

PeterBronez@hachyderm.io 2 points 4 months ago

@along_the_road what’s the alternative scenario here?

You could push to remove some public information from Common Crawl. How do you identify what public data is _unintentionally_ public?

Assume we solve that problem. Now the open datasets and models developed on them are weaker. They’re specifically weaker at identifying children as things that exist in the world. Do we want that? What if it reduces the performance of cars’ emergency braking systems? CSAM filters? Family photo organization?

kent_eh@lemmy.ca 1 point 4 months ago

“what’s the alternative scenario here?”

Parents could not upload pictures of their kids everywhere in a vain attempt to attract attention to themselves?

That would be good.

PeterBronez@hachyderm.io 1 point 4 months ago

@kent_eh exactly.

The alternative is “if you want your content to be private, share it privately.”

If you transmit your content to anyone who sends you a GET request, you lose control of that content. The recipient has the bits.
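
As a concrete sketch of that point, assuming Python with the requests library and a made-up blog URL: any client that issues a GET ends up holding its own copy of the bytes, and nothing the publisher does afterwards reaches that copy.

```python
# Minimal illustration: a public URL answers a GET from anyone,
# and the requester keeps a full local copy of the image.
import requests

# Hypothetical URL; any publicly served photo behaves the same way.
resp = requests.get("https://example-parenting-blog.net/photos/kid.jpg")
resp.raise_for_status()

with open("local_copy.jpg", "wb") as f:
    f.write(resp.content)  # the recipient now has the bits
```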

It would be nice to extend the core technology to better reflect your intent. Perhaps embedding license metadata in the images, the way LICENSE.txt travels with source code. That’s still quite weak, as we saw with Do Not Track.
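
One minimal sketch of that idea, assuming Pillow and a JPEG input; the license string and filenames are illustrative, and nothing in a crawler is obliged to read the tag:

```python
# Sketch: stamp a license string into the standard EXIF "Copyright"
# tag (0x8298) so the intent travels with the file, LICENSE.txt-style.
from PIL import Image

LICENSE_TEXT = "CC-BY-NC-SA 4.0; no ML training"  # illustrative string

img = Image.open("family_photo.jpg")
exif = img.getexif()
exif[0x8298] = LICENSE_TEXT  # 0x8298 is the EXIF/TIFF Copyright tag
img.save("family_photo_tagged.jpg", exif=exif)

# Anyone can read the tag back...
print(Image.open("family_photo_tagged.jpg").getexif().get(0x8298))
# ...but, as with Do Not Track, nothing forces a crawler to honor it.
```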

autotldr@lemmings.world 3 points 4 months ago

🤖 I'm a bot that provides automatic summaries for articles:

Photos of Brazilian kids—sometimes spanning their entire childhood—have been used without their consent to power AI tools, including popular image generators like Stable Diffusion, Human Rights Watch (HRW) warned on Monday.

The dataset does not contain the actual photos but includes image-text pairs derived from 5.85 billion images and captions posted online since 2008.

HRW's report warned that the removed links are "likely to be a significant undercount of the total amount of children’s personal data that exists in LAION-5B."

Han told Ars that "Common Crawl should stop scraping children’s personal data, given the privacy risks involved and the potential for new forms of misuse."

There is less risk that the Brazilian kids' photos are currently powering AI tools since "all publicly available versions of LAION-5B were taken down" in December, Tyler told Ars.

That decision came out of an "abundance of caution" after a Stanford University report "found links in the dataset pointing to illegal content on the public web," Tyler said, including 3,226 suspected instances of child sexual abuse material.


Saved 78% of original text.

eveninghere@beehaw.org 3 points 4 months ago

Did we ever agree to AI training on our Reddit comments, btw?