this post was submitted on 15 Oct 2024
-9 points (28.6% liked)

No Stupid Questions


No such thing. Ask away!

!nostupidquestions is a community dedicated to being helpful and answering each other's questions on various topics.

The rules for posting and commenting, besides the rules defined here for lemmy.world, are as follows:

Rules


Rule 1- All posts must be legitimate questions. All post titles must include a question.

Questions that are jokes or trolling, memes, song lyrics as titles, etc. are not allowed here. See Rule 6 for all exceptions.



Rule 2- Your question subject cannot be illegal or NSFW material.

You will be warned first, banned second.



Rule 3- Do not seek mental, medical, or professional help here.

Breaking this rule will not get you or your post removed, but it will put you at risk, and possibly in danger.



Rule 4- No self promotion or upvote-farming of any kind.

That's it.



Rule 5- No baiting or sealioning or promoting an agenda.

Questions which, instead of being of an innocuous nature, are specifically intended (based on reports and in the opinion of our crack moderation team) to bait users into ideological wars on charged political topics will be removed and the authors warned - or banned - depending on severity.



Rule 6- Regarding META posts and joke questions.

Provided it is about the community itself, you may post non-question posts using the [META] tag on your post title.

On Fridays, you are allowed to post meme and troll questions, on the condition that they are in text format only and conform with our other rules. These posts MUST include the [NSQ Friday] tag in their title.

If you post a serious question on a Friday and are looking only for legitimate answers, please include the [Serious] tag on your post. Irrelevant replies will then be removed by moderators.



Rule 7- You can't intentionally annoy, mock, or harass other members.

If you intentionally annoy, mock, harass, or discriminate against any individual member, you will be removed.

Likewise, if you are a member, sympathiser, or follower of a movement that is known to largely hate, mock, discriminate against, and/or want to take the lives of a group of people, and you have been provably vocal about your hate, then you will be banned on sight.



Rule 8- All comments should try to stay relevant to their parent content.



Rule 9- Reposts from other platforms are not allowed.

Let everyone have their own content.



Rule 10- The majority of bots aren't allowed to participate here.



Credits

Our breathtaking icon was bestowed upon us by @Cevilia!

The greatest banner of all time: by @TheOneWithTheHair!


I typed all this up for someone who posted a... very strangely written question about something they noticed with AI, but their post appears to have been deleted/removed... and, well, I want to know whether I managed to rephrase their question in a less... difficult-to-understand format. And then the answer to said question, because I find it interesting as well.

What I typed in response:

After parsing the insanity that is your writing style and... English as a second language? Allow me to confirm and summarize, because I find this question fascinating.

You've come across an LLM trend in which said LLM is given an instruction to describe/pretend to be a human named Delilah. LLMs have gone viral at times for being instructed to formulate their output to sound like famous people with what appears to be reasonable accuracy. But what goes into that ability is human words previously written in association with that person (or rather, their full name/titles/etc), as well as purposeful restrictions given to the LLM directly (like, don't output the N word). There's a rough sketch of that kind of setup at the end of this post.

Another lesser/totally unquantifiable factor in the output's "tone" is the result of errors in the black-box algorithm that associates the "words" (not truly words, I know, but essentially) in ways you wouldn't expect.

(Here's where most of my confusion is.) Each of these "factors" associated with the tone of the output... you've given names to? Or maybe my entirely self-researched knowledge has missed an agreed-upon naming system for these "characters"? I'm not quite sure.

And now your question and qualifiers: Is there a pop-culture/historical person or character named Delilah who is associated with furry stuff? Because you have been looking at some of the interesting mistaken/inaccurate tones adopted by an LLM, and you've noticed that when you ask the LLM to output as if it were Delilah, the results are furry-related. Typically this sort of issue is mostly due to overlapping/similar names in the model's training data (as well as much stranger links with no explanation as to how they formed). And your research on "Delilah" hasn't turned up anything that would explain the LLM's furry-related output.

.... is that more or less what you are saying?
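For context, the "pretend to be Delilah" setup I'm describing usually amounts to nothing more than a system prompt in front of the conversation. A minimal sketch, assuming the OpenAI Python SDK and an API key in OPENAI_API_KEY; the model name and prompt wording are placeholders I made up, not whatever the original poster actually used:

```python
# Minimal persona-prompt sketch (illustrative placeholders throughout).
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; the exact model matters a lot
    messages=[
        # The persona instruction: the "tone" that comes back is shaped by
        # whatever the training data associates with this name, plus any
        # restrictions baked into the model or the service hosting it.
        {"role": "system", "content": "You are Delilah. Stay in character."},
        {"role": "user", "content": "Introduce yourself."},
    ],
)
print(response.choices[0].message.content)
```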

[–] hendrik@palaver.p3x.de 16 points 1 month ago* (last edited 1 month ago) (9 children)

If you post strangely written questions on social media, you probably also type strangely written text into the AI. In turn, the AI will be confused and generate some random text, for example about furries or some other random topic. If it's an AI service that's made for erotic roleplay, that's more likely than if you tried the same thing with ChatGPT.

You should ask this question in one of the AI communities, though, not on No Stupid Questions.

And it's better not to use derogatory language.

[–] KillingAndKindess@lemmy.blahaj.zone -1 points 1 month ago (1 children)

The user gave no reason to assume anything of that, nor did my description of the post, and they may find the suggestion upsetting. Not going to go all PC 5-0 on you, but I did want to distance myself from said assumption.

[–] hendrik@palaver.p3x.de 6 points 1 month ago* (last edited 1 month ago) (2 children)

Sorry, I'm also not a native speaker. I don't know what PC 5-0 means (political correctness police??). But if we want to know what happened, we need to know the circumstances. It makes a big difference which exact LLM model was used. We need to know the exact prompt and text that went in. Then we can start discussing why something happened. I'd say there's a good chance the LLM has been made to output stories like that, as is the case with LLM models that have been made for ERP. That's why I said that.
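By "knowing the circumstances" I mean capturing everything in one place so the exchange can be reproduced. A rough sketch of what I'd record, again assuming the OpenAI Python SDK; the model name, prompt, and file name are just placeholders:

```python
# Record the exact circumstances of one exchange so the behaviour can be
# reproduced and discussed later (all values here are placeholders).
import json
from openai import OpenAI

client = OpenAI()

model = "gpt-4o-mini"
messages = [{"role": "user", "content": "Pretend you are Delilah and say hi."}]

response = client.chat.completions.create(
    model=model,
    messages=messages,
    temperature=0,  # keep sampling as deterministic as possible for comparison
)

record = {
    "model": model,
    "messages": messages,
    "temperature": 0,
    "output": response.choices[0].message.content,
}
with open("llm_repro_record.json", "w") as f:
    json.dump(record, f, indent=2)
```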

[–] KillingAndKindess@lemmy.blahaj.zone 0 points 1 month ago (1 children)

Oh, hmmm, that's a rather interesting route I didn't think to go down. Most of my interest in and consumed content on AI has been through videos/explanations by people much smarter than I am, and not really through use of any LLMs in any sort of manner, except a few exchanges with a few of OpenAI's models over the last few years. I didn't even consider that those sorts of things were a common thing.

My limited LLM knowledge does lead me to believe that both interpretations of the question would more or less boil down to the same thing, though. A little search-engine hunting of my own has also come up empty, and I'm curious whether this is one of those super interesting and crazy associated-token relationships, or whether there is just a crapload of content I can't find.
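For my own curiosity, here's the kind of quick-and-dirty check I might try: seeing how close "Delilah" lands to a handful of furry-adjacent and other terms in an off-the-shelf sentence-embedding model. This is only a rough outside proxy, not the chatbot's own internal token associations, and the model name and word list are just my guesses:

```python
# Rough probe of name associations using sentence embeddings (a stand-in for
# the LLM's internal associations, which we can't inspect directly).
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose model

name = "Delilah"
candidates = [
    "furry fandom",
    "fursona",
    "anthropomorphic fox character",
    "biblical figure",
    "pop song about a woman",
    "detective character",
]

vecs = model.encode([name] + candidates)
vecs = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)  # cosine-normalize

scores = vecs[1:] @ vecs[0]
for label, score in sorted(zip(candidates, scores), key=lambda x: -x[1]):
    print(f"{score:.3f}  {label}")
```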

[–] hendrik@palaver.p3x.de 2 points 1 month ago* (last edited 1 month ago) (1 children)

I don't think it's necessary to distance oneself from doing said roleplay. I bet society looks down on individuals doing it, but I think it's perfectly fine, as long as it stays somewhat healthy and no one gets harmed.

There is a considerable group of people who roleplay with AI, or have "virtual girlfriends" or companions. It all started with Replika AI; nowadays there are other services for that. And these LLMs are made to be lewd and suggestive, including all kinds of niche interests. You'll find several articles about it if you google "virtual girlfriends" or "AI companions". It's more or less only discussed in some niche areas of the internet, since there is a stigma to it.

[–] KillingAndKindess@lemmy.blahaj.zone 1 points 1 month ago (1 children)

Oh, I'd have no shame using that kind of thing. FFS, I think having a fursona seems fun and liberating, if not for the horrible amount of sweat that has gotta be involved.

I was just trying to say that I made my attempt at rephrasing without knowing those were really a thing, and that additional possibility/context might have adjusted how I read what I remember.

I refuse to ick anyone's consensual yum, even the really far-out-there stuff that isn't for me, and I hate it when others do. Being a trans woman, I'm no stranger to being reduced to a fetish to be icked.

Fuck that shit, and do it in a furry suit if you want lol

[–] hendrik@palaver.p3x.de 1 points 1 month ago

Agreed. That's the spirit. I never get why some people think differently. I mean, other people's lives are none of my business. And the core rules of sexuality (and life in general) are very simple: we need consent from all parties, and no one should get harmed. That's about it. Everything else is individual, and we all like different things.

[–] KillingAndKindess@lemmy.blahaj.zone -1 points 1 month ago (1 children)

Oh, and PC 5-0

PC - politically correct (a very... broad term)

5-0 is a colloquial term meaning police.

Idk how much English-language internet you consume as a non-native speaker, but just straight-up saying "PC police"... is something I'd rather not keep using.

[–] hendrik@palaver.p3x.de 1 points 1 month ago (1 children)

Alright, thx for the explanation. Yeah, I don't have a filter; I just say whatever I think. I don't really care whether it's offensive, just whether it's true or not. Which is hard to tell in this case, since we don't have enough information at hand. And LLMs are complex. Could be a fluke. Or whatever.

NP.

I watch my own words, but really try not to stifle someone else's. You do you, boo boo.
