this post was submitted on 26 Mar 2024
270 points (89.9% liked)

A Boring Dystopia


Kill me now.

[–] Beldarofremulak@lemmy.world 120 points 7 months ago (3 children)

AI will make things better

[–] Ok_imagination@lemmy.world 112 points 7 months ago (3 children)

Hahaha, I like that having it re-read the question fixed the issue...

[–] Riven@lemmy.dbzer0.com 43 points 7 months ago (2 children)

I tried the same AI and asked it to provide a list of 20 things, but it only gave me 5. When I asked for the rest, it apologized and then provided them. It's weird that it stumbles at first but is able to see its error and fix it. I wonder if it's something it 'learned' from the dataset: people not correctly answering prompts the first time.

[–] webghost0101@sopuli.xyz 10 points 7 months ago

Something else I also encounter a lot with GPT-4 is asking "why did you do x or y?" out of general curiosity about how it handles the task.

Almost every time it apologizes and does a full redo, avoiding x or y.

[–] Gabu@lemmy.world 0 points 7 months ago

Might be an intentional limitation to avoid issues like the "buffalo" incident with GPT-3 (it would start leaking information it shouldn't after repeating a word too many times).

[–] kralk@lemm.ee 15 points 7 months ago (1 children)

If only that worked on humans!

[–] echodot@feddit.uk 11 points 7 months ago

I personally don't think a large section of the population meets the requirement for general intelligence so I think it's a bit rich to expect the AI to do it as well.

[–] CuttingBoard@sopuli.xyz 0 points 7 months ago

We all know the first black man in space was George Santos.

[–] ICastFist@programming.dev 25 points 7 months ago (2 children)

I still want to know what the fucking fuck triggered "possible self harm" in your first question.

[–] echodot@feddit.uk 7 points 7 months ago (2 children)

It's weird though, because they were able to point out the absurdity of its comment and it did agree. So it's not just algorithmic phrase matching; there is an actual "thought process" going on.

I've never been able to get an AI to explain its logic though which is a shame. I'm sure it would be useful to know why they come up with the answers they do.

[–] force@lemmy.world 8 points 7 months ago* (last edited 7 months ago)

I've never been able to get an AI to explain its logic though which is a shame. I'm sure it would be useful to know why they come up with the answers they do.

you and AI researchers both. it's probably a trillion-dollar problem at this point

[–] machinin@lemmy.world 1 points 7 months ago (1 children)

they were able to point out the absurdity of its comment and it did agree. So it's not just algorithmic phrase matching; there is an actual "thought process" going on.

Or it just knows to say those words when someone says "are you sure?" or something similar.

[–] echodot@feddit.uk 1 points 7 months ago (2 children)

But then it provided the correct answer, so it's not just a rote response. If it were, it would say "no, I am not sure" but then be unable to provide the corrected answer.

[–] KISSmyOS@feddit.de 2 points 7 months ago

My guess is, when it gets negative feedback it throws a bit more computing power at your instance for the second reply.

[–] machinin@lemmy.world 1 points 7 months ago

You could test it on a correct answer. Ask a question, see if it gives a correct answer, then ask "are you sure?" to see what kind of response it gives. My guess is that you won't get an answer like "yes, I'm sure, that was the correct answer."

[–] Beldarofremulak@lemmy.world 5 points 7 months ago

Just early days of AI fun. Neo probably disconnected from his goo bath or sum idk.

[–] nucleative@lemmy.world 1 points 7 months ago

Normally it ends the conversation at this point and refuses to answer anything else, disabling the text box. At least it let you try again!