[–] deavid@lemmy.world 1 points 1 year ago

So far most models on HuggingFace are also "censored", so maybe something can be gained from this technique. But there are also "uncensored" models over there that can be used instead.
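
Running one of those models locally is pretty straightforward with the `transformers` library. A minimal sketch, assuming Python with `transformers` and PyTorch installed; the model id below is a hypothetical placeholder, not a recommendation:

```python
# Minimal sketch: load a community model from the HuggingFace Hub
# and generate a completion. The model id is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "some-community/uncensored-model"  # hypothetical model id

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tokenize a prompt, generate up to 50 new tokens, and decode the result.
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```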

[–] deavid@lemmy.world 0 points 1 year ago (3 children)

Large language models from corporations like OpenAI or Google have to limit what their AIs will say, to prevent users from receiving potentially harmful or illegal instructions, since that could lead to lawsuits.

So, for example, if you ask it how to break into a car or how to make drugs, the AI will reject the request and offer "alternatives" instead.

The same happens with requests for medical advice, or when you treat the AI like a human.

Jailbreaking here refers to misleading the AI to the point that it ignores these safeguards and tells you what you want.

[–] deavid@lemmy.world 1 points 1 year ago

Well, it is kinda expected, but also very funny. Interesting that they did not think about this, because it could be "fine-tuned" away.
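
For reference, "fine-tuning away" a behavior roughly means continuing training on examples of the corrected behavior. A minimal sketch using the `transformers` Trainer API; the base model and the tiny dataset here are hypothetical stand-ins:

```python
# Sketch: continue training a causal LM on a handful of examples
# demonstrating the desired behavior. Dataset contents are made up.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import Dataset

model_name = "gpt2"  # stand-in for the actual model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical examples of the corrected behavior.
examples = ["prompt -> corrected answer", "another prompt -> corrected answer"]

def tokenize(batch):
    # For causal LM training, labels are a copy of the input ids.
    # (A real setup would also mask padding tokens in the labels.)
    enc = tokenizer(batch["text"], truncation=True,
                    padding="max_length", max_length=64)
    enc["labels"] = enc["input_ids"].copy()
    return enc

ds = Dataset.from_dict({"text": examples}).map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=ds,
)
trainer.train()
```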

 

It's interesting that they were able to get a model with 350M parameters to outperform others with 175B parameters.