this post was submitted on 03 Aug 2023
116 points (89.2% liked)

ChatGPT

Unofficial ChatGPT community to discuss anything ChatGPT

[–] Karmmah@lemmy.world 72 points 1 year ago (2 children)

I also noticed that ChatGPT can't actually correct itself. It just says "oh sorry, here's something different" and gives you another crap answer. I noticed this with code specifically. If I remember correctly, it was better when it was brand new.

[–] squiblet@kbin.social 25 points 1 year ago (5 children)

The apology thing is sort of hilarious. I wonder what exactly they did to make it eternally apologetic. There was an article on HN recently about how it's basically impossible to get ChatGPT to stop apologizing: if you ask it to stop, it will apologize for apologizing.

[–] nepenthes@lemmy.world 36 points 1 year ago (2 children)

As a Canadian, I have also apologized for apologizing 😞

[–] Miimikko@lemmy.world 2 points 1 year ago (1 children)

I hope you are sorry about that.

[–] isVeryLoud@lemmy.ca 3 points 1 year ago (1 children)
[–] XTornado@lemmy.ml 2 points 1 year ago (1 children)

Nice reference, although I don't remember where it comes from.

[–] isVeryLoud@lemmy.ca 1 points 1 year ago

Reddit: a goalie kicked someone in the face and yelled "ARE YOU FUCKING SORRY???", mixing up "Are you ok?" and "I'm so fucking sorry".

[–] aquinteros@lemmy.world 1 points 1 year ago

ChatGPT is Canadian?

[–] Anticorp@lemmy.ml 12 points 1 year ago (1 children)

I experienced exactly that! I told it to stop apologizing for everything and just respond with correct answers and it apologized for not being able to stop apologizing.

[–] squiblet@kbin.social 5 points 1 year ago (2 children)

From HN:

Me: Can you stop using the word apologize whenever you are talking to me?

ChatGPT: I apologize (I will try to stop using the word "apologize" in my responses to you) if the use of the word "apologize" in my responses has been inappropriate or has caused any discomfort. As an AI language model, I am programmed to try to be as helpful and accurate as possible, and I apologize (I will try to use a different word) when I make mistakes or do not provide the information or assistance that is requested.

I will make an effort to avoid using the word "apologize" in my responses to you in the future. Please do not hesitate to let me know if you have any further questions or need any additional information. I am here to help and will do my best to provide accurate and helpful responses to your questions.

Also here is a recent HN discussion: https://news.ycombinator.com/item?id=36949931

[–] Anamnesis@lemmy.world 8 points 1 year ago

This almost seems like it's being a sarcastic little shit

[–] Very_Bad_Janet@kbin.social 7 points 1 year ago (1 children)
[–] Anticorp@lemmy.ml 4 points 1 year ago

It is obsequious and has a guilt complex.

[–] relevants@feddit.de 8 points 1 year ago

It's because humans have rated candidate responses, and ChatGPT has been trained to generate the kind of responses that most consistently get a preferred rating. You can imagine how an AI trained to say what people want to hear would become a people pleaser.
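For the curious, the mechanism is roughly: a "reward model" is trained on those human preference ratings, and the chat model is then tuned to maximize its score. Here's a minimal toy sketch of the pairwise preference loss in PyTorch; this is my own illustration of the idea, not OpenAI's actual code, and the data and dimensions are made up.

```python
# Toy sketch of RLHF-style reward modeling: learn to score responses so
# that human-preferred responses score higher than rejected ones.
import torch
import torch.nn as nn

torch.manual_seed(0)

EMBED_DIM = 16  # stand-in for real response embeddings (transformer outputs)
reward_model = nn.Linear(EMBED_DIM, 1)  # maps a response embedding to a scalar score
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-2)

# Fake data: pairs of (preferred, rejected) response embeddings.
preferred = torch.randn(32, EMBED_DIM)
rejected = torch.randn(32, EMBED_DIM)

for step in range(100):
    score_pref = reward_model(preferred)
    score_rej = reward_model(rejected)
    # Bradley-Terry pairwise loss: push preferred scores above rejected ones.
    loss = -torch.nn.functional.logsigmoid(score_pref - score_rej).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

A model tuned to maximize a reward learned this way drifts toward whatever raters consistently upvote, deference and apologies included.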

[–] CeruleanRuin@lemmy.world 4 points 1 year ago

That's what frustrates me the most whenever I try to use it. I tell it to be less verbose, to stop over-explaining and apologizing every time I correct it, and it just spits out another four paragraphs explaining why it's sorry.

[–] XEAL@lemm.ee 1 points 1 year ago

The only solution I can think of is using it via the API with Python and making a second call with the final reply, asking it to remove the apologies from the text, but token usage will increase.

I do something similar when I need the model to keep the language of a text before performing a task on that text. I send the model a chunk of text and ask it to respond with a single word indicating the language of the text, and then I include that in the next prompt, like "Your output must be in SPANISH", or whatever. A rough sketch of both ideas is below.
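This uses the openai Python package as it existed at the time (the pre-1.0 ChatCompletion API). The prompt wording, model choice, and function names are my own guesses at the approach, not a tested recipe:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def strip_apologies(text: str) -> str:
    """Second API call: ask the model to remove apologies from a reply."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Rewrite the user's text, removing all apologies "
                        "and apology-like phrases. Change nothing else."},
            {"role": "user", "content": text},
        ],
    )
    return response["choices"][0]["message"]["content"]

def detect_language(text: str) -> str:
    """First pass: get the language as a single word, to feed into a later
    prompt such as 'Your output must be in SPANISH'."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "user",
             "content": "Respond with a single word: the language of the "
                        "following text.\n\n" + text},
        ],
    )
    return response["choices"][0]["message"]["content"].strip().upper()
```

The downside is what's noted above: every reply costs an extra round trip, so token usage roughly doubles.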

[–] peyotecosmico@programming.dev 7 points 1 year ago* (last edited 1 year ago) (1 children)

Did you dare to say it became dumb when it interacted with us?

How dare you? /s

Ahem... Tay's tweets.

[–] LetterboxPancake@sh.itjust.works 6 points 1 year ago (1 children)

Like that Twitter bot that turned racist after talking to some people for a while.

[–] AustralianSimon@lemmy.world 2 points 1 year ago

Microsoft's AI "Tay"