this post was submitted on 23 Jul 2023
219 points (100.0% liked)

Technology

65% of Americans support tech companies moderating false information online and 55% support the U.S. government taking these steps. These shares have increased since 2018. Americans are even more supportive of tech companies (71%) and the U.S. government (60%) restricting extremely violent content online.

[–] sub_@beehaw.org 23 points 1 year ago (1 children)

65% of Americans support tech companies moderating false information online

aren't those tech companies the ones who kept boosting false information in the first place to get ad revenue? FB did it, YouTube did it, Twitter did it, Google did it.

How about breaking them up into smaller companies first?

I thought the labels on potential COVID or election disinformation were pretty good, until companies stopped doing so.

Why not do that again? Those who are going to claim it's censorship will always do so. But what needs to be done is to prevent those who are not well informed from falling into the antivax / far-right rabbit hole.

Also, force content creators / websites to prominently disclose who is funding / paying them.

[–] Steeve@lemmy.ca 2 points 1 year ago

aren't those tech companies the ones who kept boosting false information in the first place to get ad revenue?

Not really, or at least not intentionally. They push content for engagement, and misinformation is engaging. It works the same way for vote-based systems like Reddit and Lemmy: people upvote ragebait and misinformation all the time. We like to blame "the algorithm" as if it were a mysterious evil black box, but it's really just human nature.
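The point about engagement ranking can be sketched in a few lines of Python. This is a toy model with made-up data, not any platform's actual algorithm: it just shows that a ranker scoring only reactions will surface the highest-engagement post whether or not it's accurate.

```python
# Toy model (hypothetical data): a feed ranked purely by engagement
# surfaces whatever gets the most reactions, accurate or not.

posts = [
    {"title": "Careful fact-check", "accurate": True,  "upvotes": 40,  "comments": 5},
    {"title": "Nuanced analysis",   "accurate": True,  "upvotes": 80,  "comments": 12},
    {"title": "Outrage clickbait",  "accurate": False, "upvotes": 900, "comments": 300},
]

def engagement(post):
    # The ranker only sees reactions; accuracy is never an input.
    return post["upvotes"] + 2 * post["comments"]

feed = sorted(posts, key=engagement, reverse=True)
print(feed[0]["title"])  # prints "Outrage clickbait"
```

The same dynamic holds whether the score comes from an ML recommender or from raw upvote counts: nothing in the objective distinguishes true from false.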

I don't see how breaking them up would stop misinformation, because no tech company (or government, frankly) actually wants to be the one to decide what counts as misinformation. Facebook and Google have actually lobbied for governments to start regulating social media content, but nobody will touch it, because as soon as you start regulating content you'll have everyone screaming about "muh free speech".