this post was submitted on 15 Oct 2024
544 points (96.9% liked)
Fuck AI
you are viewing a single comment's thread
If I used a calculator on a maths test I should only be penalised if the rules stated no calculators.
According to Oxford, plagiarism is defined as,
I think that covers 100% of your argument here.
LLMs can't cite their source materials without opening the business behind them to litigation, which means the LLM can't be asked for consent.
The child, in this case, cannot get consent from the original authors whose content trained the LLM, cannot get consent from the LLM, and incorporated the result of LLM plagiarism into their work while attempting to pass it off as their own.
The parents are entitled, enabling pricks and don't have legal ground to stand on.
LLMs are certainly trained without consent, but they exist to spot common patterns. They're only likely to plagiarise when the text is similar to lots of other text.
In fact, the academic practice of references and exact quotes has actually increased the tendency of statistical models to "plagiarise".
LLMs will continue to be useful academic tools. We just have to learn how best to incorporate them into our testing.
After reading that the exam rules basically said not to use ChatGPT or similar, I completely agree.
Is AI more like a calculator, or more like copy/pasting Wikipedia articles without attribution?
It's not really a calculator, because it gives different answers each time. Newer models can give attribution (e.g. Bing Copilot).
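The difference can be sketched with a toy example (everything here is hypothetical, not how any real model works internally): a calculator computes one deterministic answer, while an LLM samples from a probability distribution over possible next tokens, so repeated runs can legitimately disagree.

```python
import random

# Hypothetical toy "next-token" distribution for the prompt "2 + 2 = ?".
# A real LLM samples from learned probabilities in much the same spirit.
def sample_answer(rng: random.Random) -> str:
    answers = ["4", "four", "approximately 4"]
    weights = [0.7, 0.2, 0.1]
    return rng.choices(answers, weights=weights, k=1)[0]

# A calculator is deterministic: 2 + 2 is always 4.
calculator_answer = 2 + 2

# A sampled model need not be: different random seeds can pick
# different answers, all drawn from the same distribution.
outputs = {sample_answer(random.Random(seed)) for seed in range(20)}
```

Every element of `outputs` is a plausible answer, but nothing forces them all to be identical, which is why "it gives different answers" is a fair objection to the calculator analogy.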
My opinion is that LLMs are not going to go away. Testing needs to adapt to focus on the human element. Marks are no longer lost for bad handwriting.
Just like when I was a kid using Wikipedia for research when it wasn't acceptable: the expectation should be that you use it to understand the material, then follow it to the source material and read that, or at least find a relevant quote that lets you restate what Wikipedia said in your own words, with attribution.
Copying wiki, or copying the output of an LLM, are both similarly academically fraudulent. LLMs are just more likely to also be wrong.
Mostly agreed. I think the "in your own words" part will be debated strongly over the next few years. Will proof of writing your own prompt be sufficient?
And what if you had an app on your phone that let you just take a picture of the question and write out the answer it gave you? A calculator still requires that you know what to input, and at the level of math where a calculator really is just easy mode, the rules absolutely would specifically prohibit them.
At college level, the question setter should ensure they are testing something where this is not possible.
Do you think you should be penalised if you got ChatGPT to sit the math test for you?
No, you'd be penalising yourself (unless you got the Wolfram Alpha plugin working).
Professors should be setting exams that ChatGPT can't hope to solve.