this post was submitted on 23 Jul 2023
219 points (100.0% liked)

Technology

 

65% of Americans support tech companies moderating false information online and 55% support the U.S. government taking these steps. These shares have increased since 2018. Americans are even more supportive of tech companies (71%) and the U.S. government (60%) restricting extremely violent content online.

[–] mojo@lemm.ee 2 points 1 year ago (2 children)

Big tech should be doing so, but they already get pressure from advertisers to moderate that away. As for the smaller ones, like the insane right-winger sites, let them spew nonsense, because it sets a dangerous precedent for everyone else if they're not allowed to shit out their misinformation and bigotry.

That's where the internet should very much remain free. There are too many cases of legitimate websites that could be shut down through these means. We need to correct misinformation with correct information.

[–] alyaza@beehaw.org 8 points 1 year ago (1 children)

We need to correct misinformation with correct information.

my counterpoint to that would be: hasn't COVID demonstrated how ineffective this actually is compared to deplatforming incorrect information to begin with? people did a lot of that correcting during the pandemic, and yet there are still tens, possibly hundreds, of millions of people who were misled and now believe things contrary to even basic science. i think the anti-vaxx movement has, to a lesser extent, demonstrated similar limits to this approach relative to deplatforming.

[–] mojo@lemm.ee 4 points 1 year ago* (last edited 1 year ago)

There are a lot more reasons that account for that; COVID became a very big social and political issue, which is a separate problem. I ask you then: how would we enforce this if non-megacorps were forced to comply? Should decentralized media like Lemmy be forced to comply as well? Would the US be forced to defederate from other countries' instances if those countries don't govern speech the same way? Would encrypted communication that cannot be governed get banned, so we can carefully make sure misinformation doesn't get spread? As soon as you think about how this would be enforced, you quickly realize what a terrible idea it is.

Of course it's easy to say "misinformation bad"; nobody is disagreeing with you. But getting to actual practical solutions that maintain freedom and privacy is the hard part.
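For context on the "defederate" question: federated platforms like Lemmy can simply stop accepting content from instances they block. A minimal, hypothetical sketch of that idea (the blocklist, domain name, and function below are illustrative assumptions, not Lemmy's actual code):

```python
# Hypothetical illustration of instance-level "defederation": a federated
# server refuses activities whose origin domain is on its blocklist.
from urllib.parse import urlparse

BLOCKED_INSTANCES = {"misinfo.example"}  # hypothetical blocklist

def accept_activity(activity: dict) -> bool:
    """Return True if an inbound federated activity should be processed."""
    origin = urlparse(activity.get("actor", "")).hostname or ""
    return origin not in BLOCKED_INSTANCES

print(accept_activity({"actor": "https://misinfo.example/u/someone"}))  # False
print(accept_activity({"actor": "https://beehaw.org/u/alyaza"}))        # True
```

The point of the sketch is only that defederation is a per-instance choice; a law mandating it across every small or self-hosted server is where the enforcement question above gets thorny.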

[–] Veraticus@lib.lgbt 1 points 1 year ago (1 children)

This seems a rather naïve point of view unfortunately.

People are persuaded and misled by misinformation all the time, even relatively smart people. Correct information being available does not mean that people will be able to choose correctly between correct information and misinformation; or, if already misinformed, that they will suddenly realize they've been misled and abandon their false beliefs.

The way to combat it is not to present correct information and pray that people make an informed decision; it's to stem the spread of bad information before it can gain converts. We already do this for some information we deem simply too harmful for society (child porn, terrorism). Given that COVID misinformation cost thousands of lives and millions of dollars, I would say it certainly should be added to that list.

[–] mojo@lemm.ee 3 points 1 year ago (1 children)

Absolutely not; it's a slippery slope. It's one of those "think of the children!" arguments, where we decide which words are too harmful.

If they actually wanted to go and block misinformation on the web, why would they not also ban e2ee communication? It's clearly a loophole through which ungoverned misinformation could be spread!
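The technical premise behind this point is that with end-to-end encryption the relaying platform only ever sees ciphertext, so there is nothing for it to inspect or moderate. A minimal sketch of that premise, assuming the PyNaCl library purely for illustration (it is not mentioned anywhere in the thread):

```python
# Why a relay can't moderate e2ee content: it only ever handles ciphertext.
# Requires PyNaCl (pip install pynacl).
from nacl.public import PrivateKey, Box

sender_key = PrivateKey.generate()
recipient_key = PrivateKey.generate()

# The sender encrypts directly to the recipient's public key.
sending_box = Box(sender_key, recipient_key.public_key)
ciphertext = sending_box.encrypt(b"some claim, true or false")

# This opaque blob is all a relaying platform ever sees.
print(ciphertext.hex()[:32], "...")

# Only the recipient, holding the matching private key, can decrypt it.
receiving_box = Box(recipient_key, sender_key.public_key)
print(receiving_box.decrypt(ciphertext))  # b'some claim, true or false'
```

Any mandate to police "ungoverned" content in such channels therefore has to weaken or ban the encryption itself, which is the loophole argument being made here.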

[–] Veraticus@lib.lgbt 4 points 1 year ago (1 children)

I entirely disagree that it's a slippery slope. We already have child pornography and anti-terrorism laws that platforms must follow, and yet we have somehow failed to fall down any further slopes (and in fact that material is illegal even over e2ee communication). Yet e2ee communication, Facebook, and Twitter continue to exist.

Why would adding misinformation to this list cause that to change?

Secondly, your argument can be used, exactly as you are making it, to say that child pornography and terrorist content on the Internet are actually fine. Why not simply allow their publication, but tell people they're bad and not to pay attention to them?

[–] mojo@lemm.ee 6 points 1 year ago (1 children)

Actually, these exact arguments are already being used to try to ban encryption.

See the UK: https://en.m.wikipedia.org/wiki/Encryption_ban_proposal_in_the_United_Kingdom

We've already had multiple bills in the US attempting the same: https://cyberlaw.stanford.edu/blog/2020/01/earn-it-act-how-ban-end-end-encryption-without-actually-banning-it

Even coalitions of governments are banding together to push for banning it across multiple countries: https://www.justice.gov/opa/pr/international-statement-end-end-encryption-and-public-safety

[–] Veraticus@lib.lgbt 3 points 1 year ago (1 children)

So since this is already happening, where exactly does your slippery slope objection come in? Why is this information germane to this specific argument?

[–] mojo@lemm.ee 2 points 1 year ago (1 children)

I've said it like three times already.

[–] Veraticus@lib.lgbt 4 points 1 year ago (1 children)

You rang the alarm bell about this being a slippery slope that will lead to attempts to ban e2ee, but as you yourself demonstrated, this is already happening. So I'm not sure how anything you've said applies to restricting false information online... or how it doesn't also apply to, say, bans on child pornography, unless you disagree with those too?

[–] mojo@lemm.ee 4 points 1 year ago

Because those laws are bad, this would add to them, and it would itself be one of those bad laws... I've said this multiple times now. CP is an entirely different story and is universally banned; nobody wants that on their servers. Speech is another matter. Countries will never agree on what counts as misinfo and what counts as truth, and it would become a constant game of servers banning traffic from different countries depending on whether they agree with what those countries consider or enforce as misinfo. Misinfo idiots should still be getting banned off of social media, but the law should not be involved.