this post was submitted on 19 Dec 2023
Comic Strips
This makes a lot of assumptions, though, and none of them are ones I particularly agree with.
First off, it is predicated entirely on the assumption that AI is going to think like humans, reason the way humans and corporations do, and share the same goals and drives that corporations have.
That calls the entire argument into question. It relies on simple models to try to predict something that doesn't even exist yet, which makes its results inherently unreliable. It's hard to guess the future when you don't know what it will look like.
Decision theory has one major drawback: it's based entirely on past events and doesn't take random chance or unknown unknowns into account. You cannot rely on "expected variations" in something that has never existed. The weather can't be adequately predicted three days out because minor variables can impact things drastically. A theory that doesn't account for such variables simply won't come close to predicting something as complex and unimaginable as artificial intelligence, sentience, and sapience.
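To make the decision-theory point concrete, here's a toy sketch (all numbers are made up for illustration, not from this thread): an expectation computed only from past observations carries no information about an outcome that never appeared in the data, so an "unknown unknown" lands arbitrarily far from the estimate.

```python
import statistics

# Hypothetical past observations of some quantity (illustrative data only).
past_outcomes = [1.0, 1.1, 0.9, 1.05, 0.95]

# Classic decision-theory input: the expected value estimated from history.
expected = statistics.mean(past_outcomes)

# A novel event far outside anything in the historical record.
novel_outcome = -50.0

# The estimate is off by many multiples of the historical spread,
# because nothing in the past data hinted at this outcome.
error = abs(novel_outcome - expected)
spread = statistics.pstdev(past_outcomes)
print(expected)          # 1.0
print(error > 10 * spread)  # True
```

The spread of the past data (about 0.07 here) gives no warning that an outcome hundreds of standard deviations away is possible, which is the commenter's objection to extrapolating from history alone.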
Like I said.
Doubt.jpg
Why do you think that? What part of what I said made you come to that conclusion?
Oh, I see. You just want to be mean to me for having an opinion.
I worded that badly. It should more accurately say "it's heavily predicated on the assumption that AI will act in a very particular way, thanks to the narrow scope of human logic and comprehension." It still sort of applies, though, because of the quote below:
I disagree heavily with your opinion, but no, I'm not looking to be mean to you for having one. I am, however, genuinely sorry that it came off that way. I was dealing with something else at the time that was causing me some frustration, and I can see how that influenced the way I worded things and behaved. Truly, I am sorry. I edited the comment to be far less hostile and more forgiving and fair.
Again, I apologize.