this post was submitted on 27 Aug 2023

Futurology


top 6 comments
[–] Echo71Niner@lemm.ee 2 points 1 year ago (1 children)

The group applied 14 criteria based on human consciousness theories to AI architectures like ChatGPT, finding none likely to be conscious... and now that ChatGPT knows the 14 criteria... it's just about time.

[–] Lugh@futurology.today 2 points 1 year ago

Mmm, but no one has a theory for how consciousness arises, and ChatGPT has no independent reasoning ability; it's just stuck with the existing corpus of human knowledge...

[–] Prewash_Required@sh.itjust.works 1 points 1 year ago (2 children)

Thanks for posting this. The TL;DR is that, using the checklist, no current AI is conscious, but there are no great barriers to one becoming conscious in the future. Also, the article itself does not list the indicators, but it does link to the academic paper, which does.

An interesting point from the article is that these are indicators of human consciousness, since that's all we really know for certain. Current AI models could be as conscious as many 'lower order' living things because we just don't know how they experience the world. Given that there are 'cruelty to animals' laws on the books all over the world, I wonder how far down the checklist you have to go before 'cruelty to AI' becomes a thing.

[–] Lugh@futurology.today 1 points 1 year ago (1 children)

Current AI models could be as conscious as many ‘lower order’ living things

I find a lot of the assumptions people make about AI consciousness very puzzling.

A frequent assumption is that because consciousness has emerged from animal brain architecture, it follows that it will also emerge from electronic circuitry. However, it's entirely possible an AGI thousands of times more capable than an average human brain could have no consciousness at all.

Consciousness might be an unusual quirk that arises only in very specific circumstances, and biological brains, by chance, happen to be one of them. Who knows? As no one understands how consciousness arises, we can't say.

The stronger the claims people make about AI consciousness, the less confidence I have in them.

[–] kakes@sh.itjust.works 1 points 1 year ago

Somehow I doubt that our brains just happen to be un-simulate-able. There's no reason to think we couldn't one day replicate consciousness simply by having the resources to emulate a human brain.

[–] winky88@startrek.website 0 points 1 year ago

I'd like to think we're above some ridiculous "cruelty to AI" sentiment as a species, but then I rewatch "The Measure of a Man" and begin to question just how ridiculous that notion really is.