this post was submitted on 23 May 2024
TechTakes
If your issue with the result is plagiarism, what would have been a non-plagiarizing way to reproduce the information? Should the system not have reproduced the information at all? If it shouldn't reproduce things it learned, what is the system supposed to do?
Or is the issue that it reproduced an idea that it probably only read once? I'm genuinely not sure, and the original comment doesn't have much to go on.
The normal way to reproduce information which can only be found in a specific source would be to cite that source when quoting or paraphrasing it.
But the system isn't designed for that, so why would you expect it to do so? Did somebody tell the OP that these systems work by citing a source, and the issue is that they don't?
"[massive deficiency] isn't a flaw of the program because it's designed to have that deficiency"
It is a problem that it plagiarizes; how does saying "it's designed to plagiarize" help????
"the murdermachine can't help but murder. alas, what can we do. guess we just have to resign ourselves to being murdered" says murdermachine sponsor/advertiser/creator/...
Please stop projecting positions onto me that I don't hold. If what people told the OP was that LLMs don't plagiarize, then great, that's a different argument from what I described in my reply, thank you for the answer. But you could try not being a dick about it?