I found that idea interesting. Will we consider it the norm in the future to have a "firewall" layer between news and ourselves?

I once wrote a short story in which the protagonist receives news of the death of a friend, but the message is intercepted by their AI assistant, which says: "when you have time, there is some emotional news that does not require urgent action, which you will need to digest." I feel this could become the norm.
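
To make the idea more concrete, here is a minimal sketch of what that kind of assistant-side triage could look like. Everything in it is invented for illustration; in particular, the urgency and emotional-weight scores are assumed to come from some upstream classifier, which is of course the genuinely hard part:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class NewsItem:
    text: str
    urgency: float           # 0.0 = can wait, 1.0 = act now
    emotional_weight: float  # 0.0 = neutral, 1.0 = heavy

@dataclass
class Assistant:
    held: List[NewsItem] = field(default_factory=list)

    def receive(self, item: NewsItem) -> str:
        # Urgent items always pass straight through, whatever their tone.
        if item.urgency > 0.7:
            return item.text
        # Heavy but non-urgent items are held for a calmer moment.
        if item.emotional_weight > 0.6:
            self.held.append(item)
            return ("when you have time, there is some emotional news "
                    "that does not require urgent action.")
        return item.text

    def when_ready(self) -> List[str]:
        # Release everything that was held back, oldest first.
        items, self.held = self.held, []
        return [i.text for i in items]

assistant = Assistant()
print(assistant.receive(NewsItem("your train is cancelled", 0.9, 0.1)))
print(assistant.receive(NewsItem("an old friend has died", 0.2, 0.95)))
print(assistant.when_ready())
```

All the difficulty is hidden inside those two scores, which is where the real design (and the real failure modes) would live.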

EDIT: For context, Karpathy is a very famous deep learning researcher who just came back from a two-week break from the internet. I don't think he is talking about politics there, but it applies quite a bit.

EDIT2: I find it interesting that many reactions here are (IMO) missing the point. This is not about shielding oneself from information one may be uncomfortable with, but about tweets specifically designed to elicit reactions, which is becoming something of a plague on twitter due to its new incentives. It is about the difference between presenting news in a neutral way and presenting it as "incredibly atrocious crime done to CHILDREN and you are a monster for not caring!". The second one feels a lot like an exploit of emotional backdoors, in my opinion.

ondoyant@beehaw.org 0 points 7 months ago

i have a general distaste for the mind/computer analogy. no, tweets aren't like malware, because language isn't like code. our brains were not shaped by the same forces that computers are, they aren't directly comparable structures that we can transpose risks onto. computer scientists don't have special insight into how human societies work because they understand linear algebra and network theory, in the same way that psychologists and neurologists don't have special insight into machine learning because they know how the various regions of the human brain interact to form a coherent individual mind, or the neural circuits that go into sensory processing.

i personally think that trying to solve social problems with technological solutions is folly. computers, their systems, the decisions they make, are not by nature less vulnerable to bias than we are. in fact, the kind of math that governs automated curation algorithms happens to be pretty good at reproducing and amplifying existing social biases. relying on automated systems to do the work of curation for us isn't some kind of solution to the problems that exist on twitter and elsewhere, it is explicitly part of the problem.
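
as a toy illustration of that amplification effect (entirely made up, not how any real platform works): give two posts identical intrinsic appeal but a tiny head start to one, and let a naive ranker always surface the current leader. the initial bias doesn't wash out, it gets locked in:

```python
import random

random.seed(0)
clicks = {"post_a": 105, "post_b": 100}  # small arbitrary head start

for _ in range(10_000):
    # naive ranker: always show whichever post has more clicks so far
    shown = max(clicks, key=clicks.get)
    # both posts have the same intrinsic appeal (10% click chance)
    if random.random() < 0.1:
        clicks[shown] += 1

print(clicks)  # post_a captures essentially all new engagement
```

swap the head start for a demographic skew in the training data and you get the same dynamic with much uglier consequences.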

twitter isn't giving you "direct, untrusted" information. it's giving you information served by a curation algorithm designed to maximize whatever twitter's programmers have built it to maximize, and those programmers might not even be accurately identifying what they're maximizing for. assuming that we can make a "firewall" that maximizes for neutrality or objectivity is, to my mind, no less problematic than the systems that already exist, because it makes the same assumption: that we can build computational systems that reliably and robustly curate human social networks in ways that are provably beneficial, "neutral", or unbiased. that just isn't a power computers have, nor is it something we should want as beings with agency and autonomy. people should have control over how their social networks function, and that control does not come from outsourcing social decisions to black-boxed machine learning algorithms controlled by corporate interests.
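
and on misidentifying what you're maximizing for, here's a toy goodhart's-law example (numbers entirely invented): the feed optimizes the one thing it can measure, clicks, while the thing people presumably actually care about goes the wrong way:

```python
posts = [
    {"name": "calm_report",  "click_rate": 0.05, "satisfaction": +1.0},
    {"name": "outrage_bait", "click_rate": 0.15, "satisfaction": -0.5},
]

# the optimizer picks whatever maximizes the measurable proxy...
chosen = max(posts, key=lambda p: p["click_rate"])

# ...and the unmeasured objective quietly goes negative.
print(chosen["name"], chosen["satisfaction"])  # outrage_bait -0.5
```

no amount of cleverness in the optimizer fixes a proxy that points the wrong way.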