this post was submitted on 04 Jan 2024
8 points (100.0% liked)
Information Security
This seems tied to the issues I've had when using LLMs: they spit out what they think might work, not what is best. I frequently get suggestions that I have to clean up, or I have to ask guiding follow-up questions.
If I had to guess, it's because nothing enforces quality on the training data or the generated text, so the model tends toward the most frequent approaches rather than the best ones.