I'm glad it's not just me 😭 But, well, the grass is always greener. At least that's what I keep telling myself; maybe someday I'll believe it, lmao.
Emberwatch
Shoutout to the homies at the Citizens' Climate Lobby, where you can take like 5 minutes to fill in a mostly pre-written message to your reps and tell them to fucking act on climate legislation. Dunno if it's cool to post links, but this one just takes you to it with no hassle (also, don't trust strangers' links on the internet; you can google "citizens climate lobby write congress" and get to the same destination):
If you're doing nothing else about climate change, at least do this. If you're browsing a comment section, you've got five fucking minutes to spare to help sound the alarm. Don't be a doomer who doesn't actually do anything; otherwise you're just ruining your own day for no reason.
As long as no harm is done, why worry? If it does venture into harmful territory, Street Epistemology (SE) might be a good approach, not necessarily to change his mind on the subject (changing minds isn't really the goal of SE anyway), but to help him examine the reasons for his belief.
Anthony Magnabosco on YouTube has some great videos on SE, if you're interested in the topic.
I've been thinking about this for a while, and would love to have other, more knowledgeable (hopefully!) opinions on this:
I've been dwelling on how we might enforce some agreed-upon set of rules or "morals" on artificial general intelligence systems. Enforcement should almost certainly be distributed, in order to prevent any single entity (a government, private individual, corporation, or AGI system) from seizing control of it, and it should allow a potentially growing set of rules or directives that couldn't be edited or controlled by a singular actor, at least in theory.
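To make the "no singular actor can edit the rules" idea concrete, here's a minimal sketch of one possible mechanism: a quorum scheme where a rule only takes effect once k of n independent parties approve it. All the names here (`QuorumRuleRegistry`, the party identifiers) are made up for illustration; this is just one way the distribution could work, not a real implementation.

```python
# Hypothetical sketch: a rule registry where no single actor can change
# the rule set on their own. A proposed rule is applied only once a
# quorum (k of n) of independent parties has approved it.

from dataclasses import dataclass, field

@dataclass
class QuorumRuleRegistry:
    parties: set          # identifiers of the independent approvers
    quorum: int           # approvals required before a rule takes effect
    rules: set = field(default_factory=set)           # rules in force
    _pending: dict = field(default_factory=dict)      # rule -> approvers so far

    def propose(self, party, rule):
        """Record one party's approval; apply the rule once quorum is met."""
        if party not in self.parties:
            raise ValueError("unknown party")
        approvals = self._pending.setdefault(rule, set())
        approvals.add(party)
        if len(approvals) >= self.quorum:
            self.rules.add(rule)
            del self._pending[rule]
            return True   # rule is now in force
        return False      # still waiting on other parties


# Example: three parties, any two of which must agree.
registry = QuorumRuleRegistry(parties={"gov", "ngo", "lab"}, quorum=2)
registry.propose("gov", "no self-replication")   # one approval: not yet in force
registry.propose("ngo", "no self-replication")   # quorum reached: rule applies
```

The obvious follow-up problems are exactly the ones worth asking about: who picks the parties, what stops them from colluding, and how you'd enforce the rules on the AGI itself rather than just recording them.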
What other considerations would need to be made? Is this a plausibly good use of this technology?