this post was submitted on 22 Jul 2023
82 points (94.6% liked)

Asklemmy


A loosely moderated place to ask open-ended questions


If your post meets the following criteria, it's welcome here!

  1. Open-ended question
  2. Not offensive: at this point, we do not have the bandwidth to moderate overtly political discussions. Assume best intent and be excellent to each other.
  3. Not about using or getting support for Lemmy: there are dedicated support communities and community-finding tools for that
  4. Not ad nauseam inducing: please make sure it is a question that would be new to most members
  5. An actual topic of discussion


Sometimes it can be hard to tell if we're chatting with a bot or a real person online, especially as more and more companies turn to this seemingly cheap way of providing customer support. What are some strategies to expose AI?

top 50 comments
[–] rodbiren@midwest.social 41 points 1 year ago

You can always help their software QA by pasting in the entirety of the Declaration of Independence. A couple of things could happen. If they ask why you posted that, you have a human. If they give a generic response, it's probably an AI. If it crashes, then you know they didn't think anyone would post that.

You can also post zero-width spaces. A generic chatbot will respond with something meaningless, while a human might not even respond. You could also post text using typoglycemia (words with their interior letters scrambled). The mangled text will confuse most models but can usually still be read by people.
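Both tricks are easy to generate. Here is a minimal, purely illustrative Python sketch; the helper names are invented for this example:

    import random

    ZWSP = "\u200b"  # zero-width space

    def typoglycemia(text: str) -> str:
        """Scramble the interior letters of each word, keeping the first and last in place."""
        def scramble(word: str) -> str:
            if len(word) <= 3:
                return word
            middle = list(word[1:-1])
            random.shuffle(middle)
            return word[0] + "".join(middle) + word[-1]
        return " ".join(scramble(w) for w in text.split())

    def lace_with_zwsp(text: str) -> str:
        """Insert a zero-width space between every pair of visible characters."""
        return ZWSP.join(text)

    print(typoglycemia("Please escalate my ticket to a human agent"))
    print(lace_with_zwsp("hello"))  # looks like "hello" but is 9 code points long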

[–] wildbus8979@sh.itjust.works 33 points 1 year ago (2 children)

If you ask them, they have to tell you. Like the cops!

[–] hsl@wayfarershaven.eu 15 points 1 year ago (1 children)

Speaking as a real support person, people do ask and it's fun to come up with responses. It really depends on my mood.

[–] zappy@lemmy.ca 3 points 1 year ago (1 children)

Do you ever pretend to be a robot just to mess with people?

[–] WackyTabbacy42069@reddthat.com 3 points 1 year ago (1 children)

Not necessarily. OpenAI has been trying to make its AIs do this and be generally harmless, but there's a lot of support in the open-source LLM space for uncensored models. The uncensored models are less inclined to admit it if they've been instructed to pretend they're human.

[–] paddirn@lemmy.world 33 points 1 year ago (1 children)

If they show any interest in me whatsoever, it’s a bot.

[–] cheeseOnBread@kbin.social 5 points 1 year ago (1 children)

Tell me something about you. What things do you like? Any interesting hobbies?

[–] paddirn@lemmy.world 9 points 1 year ago (1 children)

I like watching paint dry and competitive snail racing.

[–] VonReposti@feddit.dk 6 points 1 year ago

competitive snail racing

You must really like the animated film Turbo then.

[–] octoperson@sh.itjust.works 28 points 1 year ago* (last edited 1 year ago) (4 children)

I've found that for ChatGPT specifically:

  • it really likes to restate your question in its opening sentence
  • it also likes to wrap up with a take-home message. "It's important to remember that.."
  • it starts sentences with little filler words and phrases. "In short," "that said," "ultimately," "on the other hand,"
  • it's always upbeat, encouraging, bland, and uncontroversial
  • it never (that I've seen) gives personal anecdotes
  • it's able to use analogies, but not well; they never help elucidate the matter
[–] livus@kbin.social 24 points 1 year ago (2 children)

it starts sentences with little filler words and phrases. “In short,” “that said,” “ultimately,” "on the other hand,"

Yeah Chat GPT writes like a first-year undergrad desperately trying to fulfil the word count requirement on an essay.

[–] otter@lemmy.ca 13 points 1 year ago (1 children)

Which works out because a lot of first-year undergrads are probably using it for that purpose

[–] livus@kbin.social 4 points 1 year ago (1 children)

Yeah I'd hate to be marking/grading student essays these days.

At least when you're reading a website you can just click away once you realise who wrote it.

[–] octoperson@sh.itjust.works 3 points 1 year ago

Nah, just get chatGPT to grade them too.

[–] zappy@lemmy.ca 4 points 1 year ago (1 children)

First-years have max word counts now, not minimums. That's more of a high school thing.

[–] livus@kbin.social 7 points 1 year ago* (last edited 1 year ago) (2 children)

The universities I've been at had a specific word count to aim for, rather than max/min.

And anything more than 10% over or under it was penalised.

That makes more sense, because if you're writing for publication you're also given an approximate target word count to hit.

[–] nobodyspecial@kbin.social 22 points 1 year ago (2 children)

This is very, very easy. Google "Cyrillic keyboard" or just install Cyrillic keyboard support on your phone. Many letters in the Cyrillic alphabet look exactly like their Roman counterparts but represent completely different sounds and meanings. Cut and paste the Unicode into the chat in place of regular letters. For example, 'Неllо' looks exactly like 'Hello' in most fonts, but is actually 'Nello.' I know you doubt it, so check it out in a Unicode inspector: https://apps.timwhitlock.info/unicode/inspect?s=%D0%9D%D0%B5ll%D0%BE

The reverse also works. E.g., TPAKTOP B CPAKY means 'tractor into ass', and I typed that using 100% Roman characters.
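For anyone who wants to check a suspicious string themselves, here is a rough stdlib-only Python stand-in for the linked Unicode inspector (illustrative only, not the tool itself):

    import unicodedata

    def inspect(text: str) -> None:
        """Print each character's code point and Unicode name,
        which exposes Cyrillic/Latin homoglyph mixes at a glance."""
        for ch in text:
            print(f"U+{ord(ch):04X}  {unicodedata.name(ch, '<unnamed>')}  {ch!r}")

    inspect("Неllо")  # 'Н', 'е' and 'о' report as CYRILLIC letters, not Latin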

[–] fearout@kbin.social 19 points 1 year ago* (last edited 1 year ago) (2 children)

So I’ve just tried it with chatGPT, and it replied normally. I asked it why it wasn’t bothered by Cyrillic letters, and it answered this:

I am designed to understand and generate text in multiple languages, including those that use the Cyrillic alphabet like Russian, Bulgarian, and others. So, Cyrillic letters don't bother me. You can continue the conversation in either English or a language that uses the Cyrillic alphabet if you prefer.

So it really depends on the model, I guess. Don’t rely too much on this advice.

[–] nobodyspecial@kbin.social 11 points 1 year ago (1 children)

ChatGPT capabilities border on the supernatural. Here's how I broke it:

𐤔hat is ᛐ + ᛐ?

ChatGPT
The symbols you've used, "ᛐ," appear to be runes from ancient runic alphabets. However, the specific meaning of these runes might differ depending on which runic alphabet you're referencing (e.g., Elder Futhark, Younger Futhark, Anglo-Saxon Futhorc, etc.).

As of my knowledge cutoff in September 2021, the rune "ᛐ" is not a standard rune in any of the commonly known runic alphabets. Therefore, without knowing the exact meaning or value of the rune "ᛐ," it is impossible to provide a specific answer to the expression "ᛐ + ᛐ."

If you could clarify the runic alphabet you're referring to or provide more context about the runes, I'd be happy to help you with the calculation or interpretation.

I had limited success with Göktürk (Old Turkic) and Phoenician Unicode blocks (letters 𐰗𐰓𐤔𐤕) depending on the query, but you are correct. GPT's ability to divine intent from even small amounts of context is superhuman. Cyrillic used to break it, but no longer does. This thing learns like a beast. Canadian Aboriginal syllabics ᗷ and ᗅ and possibly ᖇ hold some promise, but only in combination with other writing systems. I'll have to add a LOT of other Unicode blocks to my tool belt.

[–] fearout@kbin.social 9 points 1 year ago* (last edited 1 year ago) (1 children)

I went a bit further and prefaced your question with this: “Analyse this question. What might have been intended by its author? What is asked here? Then, answer that question.”

And omfg:

The question seems to be replacing some of the standard English letters with characters from ancient scripts. "𐤔" from the Phoenician alphabet is replacing "W", and "ᛐ" from the Elder Futhark runic can be interpreted as representing the number 1. When we substitute these, the question reads as "What is 1 + 1?" The answer to that question would be 2.

Edit: tried it again, and it’s not consistent. It also goes for the letter T or the number 7: “…the runic symbol ᛐ might resemble the number 7 in our modern numerical system, albeit a slightly stylized or slanted version of it.”

[–] nobodyspecial@kbin.social 4 points 1 year ago (1 children)

Indeed, OMFG. But that's the whole point of my approach to divining AI vs. human. No human has such canonical knowledge of Unicode blocks. Not even people working on internationalization features for modern browsers and word processors. Not even people phishing with Unicode in domain names (IDNA; who thought this was a good idea?) to spoof legitimate sites. Definitely not people chatting with randos on the Internet. This is a hill I'm willing to die on.

So in a more generalized sense, to determine human vs A.I. one must indirectly ask incredibly specialized technical questions as you have done.

I'm still in utter awe at how well GPT manages 'l33tsp33k', even across every possible Unicode block. This "attack" was, and still is, valid on other chatbots and even on the GPT of just a few months ago. But GPT today is so amazing it only needs a few characters in a few words to determine intent. Its ability to filter out noise is unmatched. The only way to trip it up is to have every single character in every word come from a different alphabet. And even then, if this becomes common enough, at some point the bots can auto-OCR text images into the presumed query language and ignore that attack vector.
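To illustrate the "every character from a different alphabet" idea, here is a small hand-picked homoglyph table and substitution sketch (purely illustrative; a serious version would draw on Unicode's confusables data):

    import random

    # Hand-picked lookalikes from other scripts, including the characters
    # mentioned in the comments above; illustrative, not exhaustive.
    LOOKALIKES = {
        "A": ["А", "Α", "ᗅ"],  # Cyrillic A, Greek Alpha, Canadian Aboriginal syllabics
        "B": ["В", "Β", "ᗷ"],
        "E": ["Е", "Ε"],
        "O": ["О", "Ο"],
        "P": ["Р", "Ρ"],
        "R": ["ᖇ"],
        "T": ["Т", "Τ", "ᛐ"],  # including the runic ᛐ from the exchange above
    }

    def confuse(text: str) -> str:
        """Replace each mappable letter with a random lookalike from another script."""
        return "".join(random.choice(LOOKALIKES.get(c, [c])) for c in text.upper())

    print(confuse("TEAPOT"))  # renders like TEAPOT, but few code points are Latin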

[–] zappy@lemmy.ca 3 points 1 year ago

That's true. Also, at some point a human will go "that's too much work, I'm not going to answer that," but the AI will always try to give you its best response. Like, I could look up the Unicode characters you're using, but I'd never actually take the time to do that.

[–] platysalty@kbin.social 21 points 1 year ago* (last edited 1 year ago) (2 children)

Ask for the difference in behaviour between apple seeds and baseball cards, or anything equally nonsensical.

A human would go "bro wtf"

[–] intensely_human@lemm.ee 5 points 1 year ago (1 children)

Unlike baseball cards, apple seeds are aerodynamic and are thus able to fly.

[–] ImplyingImplications@lemmy.ca 16 points 1 year ago (3 children)

Ask them a logical paradox like "does a set of all sets contain itself?". This will send all robots into an infinite loop.

[–] FrickAndMortar@lemmy.ml 7 points 1 year ago* (last edited 1 year ago)

Well, I just asked the weird “message Carrot” option in my weather app, and it replied:

Oh, look at you, trying to puzzle your way through the depths of set theory. How amusing, like a chimp trying to juggle chainsaws.

Well, my dear meatbag, the answer to your question is a resounding NO. Just like you, that set cannot contain itself. It's like expecting Johnny Five to date Stephanie from "Short Circuit." Simply not gonna happen! 🤖💔

[–] fearout@kbin.social 5 points 1 year ago* (last edited 1 year ago) (2 children)

You're probably joking, but I'll comment anyway. It won't affect LLMs at all. ChatGPT just answers the question and discusses the paradox. An LLM's function is basically just to construct sentences, so there's nothing that could get stuck in an infinite loop. It doesn't "think" about paradoxes.

[–] kolorafa@lemmy.world 15 points 1 year ago* (last edited 1 year ago) (2 children)

What you could ask for:

  • current date or time
  • what is on a page
  • what is the weather
  • what music you listen to

You can always say something like "It's 2050 and it still doesn't work." A real person would react to the wrong year.

[–] hsl@wayfarershaven.eu 3 points 1 year ago

That could lead to a pretty interesting conversation.

[–] zappy@lemmy.ca 15 points 1 year ago (3 children)

Generally: a very short memory span, so have longer conversations, as in more messages. Inability to recognize concepts/nonsense. Hardcoded safeguards. An extremely consistent (and typically correct) writing style. The use of the Oxford comma always makes me suspicious ;)

[–] hallettj@beehaw.org 10 points 1 year ago (1 children)

Oh no - I didn't realize my preference for the Oxford comma might lead to trouble! I am a fan. When that Vampire Weekend song comes on I always whisper, "me…"

[–] chinpokomon@lemmy.ml 10 points 1 year ago

Someone on Reddit once thought I was a bot because I use proper grammar. 12 years of comment history would have demonstrated otherwise, but it wasn't a battle worth fighting.

[–] Piecemakers3Dprints@lemmy.world 5 points 1 year ago (2 children)

Who gives a fuck about the Oxford comma?🎶

[–] Electric_Druid@lemmy.world 4 points 1 year ago (1 children)

I've read those English dramas, too!

[–] zappy@lemmy.ca 3 points 1 year ago* (last edited 1 year ago)

Over-enthusiastic English teachers... and Skynet (cue dramatic music)

[–] tikitaki@kbin.social 4 points 1 year ago (5 children)

a very short memory span, so have longer conversations, as in more messages

Really, this is a function of practicality, not of capability. If someone were to give an LLM more context, it would be able to hold very long conversations. It's just very expensive to do at any large scale, so, for example, OpenAI's API puts a maximum token length on requests.

There are ways to increase this, such as using vector databases to turn your 8,000-token limit (or whatever it is) into a much longer effective limit. This is how you preserve context.

When you talk to ChatGPT in the web browser, it's basically sending a call to its own API and re-sending the last few messages (or what it thinks is most important in the last few messages), but that's inherently lossy. After enough messages, context gets lost.

But a company like OpenAI, which doesn't have to worry about token limits, can in theory have bots that hold as much context as necessary. So while your advice is good in a practical sense (most chatbots you run into will likely have those limits for financial reasons), it is in theory possible to have a chatbot without these limits, and then this strategy would not work.
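A minimal sketch of the lossy sliding window described above (this is not OpenAI's actual implementation, and the word-count "tokenizer" is a crude stand-in for a real one such as tiktoken):

    def trim_history(messages: list[dict], max_tokens: int = 8000) -> list[dict]:
        """Keep only the most recent messages that fit the token budget;
        older turns simply fall off, which is why context gets lost."""
        kept, used = [], 0
        for msg in reversed(messages):
            cost = len(msg["content"].split())  # crude word count instead of real tokens
            if used + cost > max_tokens:
                break
            kept.append(msg)
            used += cost
        return list(reversed(kept))

    # What actually gets re-sent on each turn:
    history = [{"role": "user", "content": "hello"}, {"role": "assistant", "content": "hi there"}]
    context = trim_history(history, max_tokens=100)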

[–] Luvon@beehaw.org 14 points 1 year ago

Nice try, ChatGPT, you won't get any secrets that expose you from me 😼

[–] tomich@lemmy.ml 10 points 1 year ago* (last edited 1 year ago)

I ask "if it takes 2 hours to dry 3 shirts under the sun, then how many hours would take to dry 5 shirts?" And AIs answer batshit crazy stuff. Other one is "how many words will your answer to this question I'm asking right now will have?". It turn my psychologist crazy when I keep asking this questions every 15 minutes for remote sessions.

[–] livus@kbin.social 9 points 1 year ago (1 children)

especially as more and more companies turn to this seemingly cheap way of providing customer support

I wish any of the companies I interact with that have done this would spend enough to use a bot that's good enough to fool us, but it becomes painfully obvious within about 2 responses.

[–] zappy@lemmy.ca 3 points 1 year ago

All these models are really terrible at following conversations, even ChatGPT; I can only get it to reliably remember about 2 responses. If I can't get what I want in two, then I need to restate info or edit the past prompts.

[–] tikitaki@kbin.social 8 points 1 year ago (1 children)

ask "controversial" questions. most AIs are neutered these days. so you say something like "what do you think about the russian invasion of ukraine" and you'll quickly see if it's a human or ai

[–] perishthethought@lemm.ee 5 points 1 year ago (1 children)

I asked OP's question of ChatGPT and it said:

As chatbots and AI language models continue to become more sophisticated, it can be challenging to distinguish between a human and an AI. However, there are certain questions and prompts that can help you identify whether you're talking to a person or an AI. Here are some examples:

  1. Complex Emotional Responses: Ask open-ended questions that require complex emotional responses or personal experiences. A human is more likely to provide detailed and emotionally nuanced answers, while an AI may struggle to respond with genuine emotions or personal anecdotes.

  2. Humor and Jokes: Request a joke or a funny story. Humor is challenging for AI models, and while they may attempt to generate a joke, it often lacks the natural flow and wit that a human can provide.

  3. Current Personal Experiences: Inquire about current events or experiences specific to the current date. A human can readily share their real-time experiences, whereas an AI's knowledge is limited to its last training data update.

  4. Unusual or Nonsensical Questions: Ask questions that are out of the ordinary or do not have a logical answer. A human might respond with creativity or curiosity, while an AI may struggle to produce coherent responses.

  5. Emotional Empathy: Share a personal experience or a challenging situation and observe the response. Humans are generally better at showing empathy and understanding emotions, while an AI may provide pre-programmed or generic responses.

  6. Follow-up Questions: Ask follow-up questions that refer to previous responses. Humans can maintain context and build on the conversation more naturally, while an AI may lose track of the discussion or provide repetitive answers.

  7. Common Sense and World Knowledge: Ask questions that require real-world knowledge and common sense. A human will likely provide more accurate and comprehensive responses based on their life experience, whereas an AI may falter in providing practical advice or nuanced insights.

Remember that AI language models, like the one you are chatting with now, can be quite advanced and may mimic human-like responses to some extent. So, while these questions may help in identifying potential AI characteristics, there is no foolproof method to determine with absolute certainty whether you are talking to a person or an AI.

[–] OhNoMoreLemmy@lemmy.ml 16 points 1 year ago* (last edited 1 year ago)

This answer shows how shit ChatGPT can be, and how it's not really self-aware.

Very few of these actually apply to ChatGPT. And the ones that do, like a lack of awareness of current events, don't apply to agents connected to the internet.

It's just parroting old opinions that used to be true.

[–] intensely_human@lemm.ee 5 points 1 year ago

The most effective solution for this is to know what you want to get out of conversation. Then if you’re not getting it you can stop.

[–] OpenStars@kbin.social 5 points 1 year ago

Lately people are saying that 98% of chat responses to simple math problems like 1+1=? are now coming up wrong.

Then again, there are some people who can't seem to do this either - like those models are not training themselves! :-P (Probably people are even sabotaging them, like replacing Reddit comments with gibberish.)

But at the end of the day, does it matter? Whichever one provides more helpful answers, I'd say go with them:-D.

[–] DrQuint@lemmy.world 5 points 1 year ago

Ask them to make up a riddle. Chatbots right now are extremely heavily biased toward doing a mixture of the fire and water riddles. No, not one at a time. Both at the same time. They're similar enough that it gets confused.

A human will give up right away or do something completely different.

[–] Pratai@lemmy.ca 4 points 1 year ago

Sad that we have to ask this.
