Fluffles

joined 1 year ago
[–] Fluffles@pawb.social 12 points 1 year ago (7 children)

I believe this phenomenon is called "artificial hallucination". It's when a language model goes beyond its training data and makes up information out of thin air. Every language model has this flaw, not just ChatGPT.

[–] Fluffles@pawb.social 3 points 1 year ago (1 children)