this post was submitted on 15 Apr 2024
418 points (93.2% liked)

Solarpunk


I found that idea interesting. Will we consider it the norm in the future to have a "firewall" layer between news and ourselves?

I once wrote a short story in which the protagonist receives news of a friend's death, but it is intercepted by their AI assistant, which says: "when you have time, there is some emotional news that does not require urgent action but that you will need to digest". I feel it could become the norm.

EDIT: For context, Karpathy is a very famous deep learning researcher who just came back from a two-week break from the internet. I don't think he's talking about politics there, but it applies quite a bit.

EDIT2: I find it interesting that many reactions here are (IMO) missing the point. This is not about shielding oneself from information one may be uncomfortable with, but about tweets specifically designed to elicit reactions, which are becoming a plague on Twitter due to its new incentives. It is about the difference between presenting news in a neutral way and presenting it as "incredibly atrocious crime done to CHILDREN and you are a monster for not caring!". The second one feels a lot like an exploit of emotional backdoors, in my opinion.
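
EDIT3: To make it concrete, here's a toy sketch of what I imagine such a "firewall" layer looking like. Everything in it is made up for illustration (the `NewsFirewall` class, the `score_manipulation` heuristic); a real assistant would presumably use an actual classifier or LLM rather than counting exclamation marks:

```python
# Toy sketch of a "news firewall" layer - everything here is hypothetical.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Post:
    author: str
    text: str

@dataclass
class NewsFirewall:
    # Posts scoring above this threshold are held back instead of delivered immediately.
    threshold: float = 0.7
    deferred: List[Post] = field(default_factory=list)

    def score_manipulation(self, post: Post) -> float:
        # Placeholder heuristic: ALL-CAPS words and exclamation marks as a
        # crude proxy for "engineered to elicit a reaction". A real assistant
        # would use something much smarter here.
        words = post.text.split()
        caps = sum(1 for w in words if len(w) > 2 and w.isupper())
        exclaims = post.text.count("!")
        return min(1.0, 0.2 * caps + 0.15 * exclaims)

    def filter(self, post: Post) -> Optional[str]:
        """Return the post text if it passes; otherwise defer it for later."""
        if self.score_manipulation(post) < self.threshold:
            return post.text
        self.deferred.append(post)
        return None

    def digest(self) -> str:
        # What the assistant tells you once you have time to process things.
        return f"{len(self.deferred)} emotionally charged items are waiting; none require urgent action."
```

The point of the sketch is deferral rather than deletion: the item still reaches you, just at a moment you choose, which is what the assistant in my short story was doing.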

[–] OKRainbowKid@feddit.de 6 points 6 months ago (1 children)

I see where you're coming from, but if you look up Karpathy, you'll probably come to a different conclusion.

[–] GrymEdm@lemmy.world 0 points 6 months ago* (last edited 6 months ago) (1 children)

He's talking about wanting some system to filter out tweets that "elicit emotion" or "nudge views", comparing them to malware. I looked him up and saw he's a computer scientist, which explains the comparison to malware. I assume that when he's designing AI he tries to filter what inputs the model gets so as to achieve the desired results. AI acts on algorithms and prompts regardless of values/ethics, and bad input = bad output, but I think human minds have much more capacity to cope and to assess value than modern AI. As such, I still don't like the idea of sanitizing intake, because I believe in resilience and in processing unpleasantness as opposed to stringent filters. What am I missing?

[–] OKRainbowKid@feddit.de 0 points 6 months ago

I don't think you're missing anything. Maybe you're just taking his tweet more seriously or literally than he intended. To me, it's just an interesting perspective to consider tweets that are meant to influence your opinion as malware. Sure, somebody aware of the types of "bad input" that come in the form of misinformation campaigns, propaganda, or advertising might not be (as) susceptible to them - but considering the average Twitter user, comparing this type of content to malware seems appropriate to me.