[–] b3nsn0w@pricefield.org 24 points 1 year ago (1 children)

There simply isn't enough entropy in prose to accurately detect the use of language models if either

  1. the text is short enough, like a school essay, or
  2. there has been enough human-AI collaboration.

Like, I'm sure many of us are familiar with ChatGPT's style when it's asked to write text on its own or with very little prompting. However, that's just the raw style, and ChatGPT is only one of many language models (though it's clearly the most accessible). If you provide example prose for the AI to imitate, for example with a simple tool like Sudowrite (the use of which tends to be the subject of many accusations), you will not pick those segments out of human-written prose unless the human using it is too lazy or careless to remove the obvious tells.
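To make the "not enough entropy" point concrete: most of these detectors boil down to some variant of perplexity scoring -- flag the text if a language model finds it too predictable. A minimal sketch of that idea (the model choice and the thresholding framing are my assumptions, not any vendor's documented method):

```python
# Toy perplexity "detector": score text with GPT-2 and treat low
# perplexity (i.e., highly predictable prose) as "probably AI".
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # passing labels=input_ids makes the model return its own
        # cross-entropy loss over the sequence
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

# On a few hundred tokens this number swings wildly between human
# writers, so any threshold either misses AI text or burns humans.
print(perplexity("The quick brown fox jumps over the lazy dog."))
```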

The sooner we let go of this comfortable fantasy that AI somehow leaves easy-to-isolate markers that enable a different (and vastly inferior) AI model to tell whether text was AI-written, the better. The simple truth is that if that were the case, AI companies would use the same isolation strategies to teach their models to imitate human prose better, thereby breaking detection.

And with a high chance of false positives, we're just going to recreate a cyberpunk version of the Salem witch trials. Because we simply have no proof -- if you don't trust ChatGPT with anything important, why would you trust a vastly less sophisticated AI, or something that amounts to a gut feeling, to condemn people?
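The witch-trial math is easy to run, too. A toy Bayes calculation -- all three rates below are made up for illustration, and they're charitable to the detector:

```python
# P(actually used AI | detector flagged you), via Bayes' rule.
def p_guilty_given_flagged(prevalence: float, tpr: float, fpr: float) -> float:
    flagged = prevalence * tpr + (1 - prevalence) * fpr
    return prevalence * tpr / flagged

# Assume 5% of essays are AI-written, the detector catches 90% of those,
# and it falsely flags "only" 10% of human essays:
print(p_guilty_given_flagged(0.05, 0.90, 0.10))  # ~0.32
```

So even under those generous assumptions, roughly two out of three flagged essays were written by a human.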

[–] Iteria@sh.itjust.works 3 points 1 year ago

I think that for school, assignments will just evolve. We'll probably go back to in-classroom essays. We'll also see the nature of assignments change. I remember when I was in high school and the internet was in its infancy: my history teacher just gave us more specific topics to write on, to force us to use books. My younger cousin, who came a decade behind me, found himself with topics that necessitated scholarly sources; Wikipedia wasn't gonna cut it. I imagine grade school will be like that, with assignments that demand some kind of human ingenuity while allowing for the inescapable use of technology.

I went to a top engineering college; to say that cheating was rampant and creative would be an understatement. In retrospect, how the professors got around it was fascinating. Some assignments were intentionally unpassable -- if you got a passing grade, you failed. Your curved grade was based on a normal distribution over not only your class, but every class that had ever taken the course.
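Roughly, the curve worked like this (a sketch from memory; the z-score cutoffs here are invented for illustration):

```python
# Curve a raw score against the distribution of every score ever recorded.
from statistics import mean, stdev

def curved_grade(raw: float, all_scores: list[float]) -> str:
    z = (raw - mean(all_scores)) / stdev(all_scores)
    if z >= 1.0:
        return "A"
    if z >= 0.0:
        return "B"
    if z >= -1.0:
        return "C"
    return "F"

# On an "unpassable" exam where the historical average is 25%, a raw 40%
# is more than a standard deviation above the mean -- an A.
history = [22, 25, 31, 18, 27, 24, 29, 26, 23, 25]
print(curved_grade(40, history))  # A
```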

Projects and in-person things were super common. Assignments were keyed to you as a person, and no one in the class had the exact same assignment. For that reason, collaboration in project-based classes was expected and encouraged.

The one thing AI will never have is discretion, and you can't get its output without a networked computer. I look forward to seeing how schools adapt their assessments around these facts.

[–] manitcor@lemmy.intai.tech 19 points 1 year ago

AI detectors are a scam and have been from the start. Defeating them is trivial.
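Trivial as in a few lines of string munging. Homoglyph substitution -- swapping Latin letters for visually identical Unicode ones -- is one well-documented evasion; a toy example (the substitution map and spacing are arbitrary):

```python
# The output reads identically to a human but tokenizes completely
# differently, which is enough to throw off token-statistics classifiers.
HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e"}  # Cyrillic lookalikes

def perturb(text: str, every: int = 7) -> str:
    return "".join(
        HOMOGLYPHS.get(ch, ch) if i % every == 0 else ch
        for i, ch in enumerate(text)
    )

print(perturb("This essay was written entirely by a diligent human."))
```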

[–] Steveanonymous@lemmy.world 13 points 1 year ago (1 children)

Saw the pic. Saw the AI. Saw weirdness.

Thought Weird Al had something to say about AI

[–] usualsuspect191@lemmy.ca 6 points 1 year ago

That's why I hate fonts that barely differentiate l and I

[–] bluemellophone@lemmy.world 11 points 1 year ago (1 children)

Using AI with humans in the loop is a fantastically powerful combination. With smart system design, we can solve many problems that humans alone cannot.

Source: my PhD is on automated animal censusing; we build visual databases of individual animals for passive, long-term conservation work.
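The usual shape of that loop, sketched (thresholds and the record type are illustrative, not our actual pipeline):

```python
# Route only the ambiguous middle of the confidence range to a human;
# let the model decide the clear-cut cases on its own.
from dataclasses import dataclass

@dataclass
class Match:
    animal_id: str
    confidence: float  # model's score that two sightings are the same animal

def triage(matches: list[Match], accept: float = 0.95, reject: float = 0.20):
    auto, review = [], []
    for m in matches:
        if m.confidence >= accept or m.confidence <= reject:
            auto.append(m)       # clear accept or clear reject
        else:
            review.append(m)     # a human adjudicates
    return auto, review

auto, review = triage([Match("zebra_041", 0.99), Match("zebra_112", 0.55)])
print(len(auto), "decided automatically;", len(review), "sent to a human")
```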

[–] dojan@lemmy.world 26 points 1 year ago (1 children)

Absolutely, but that's not what she's saying. She's saying that the products touting the ability to detect the use of LLMs to cheat on essays and the like are really rubbish and give a lot of false positives.

She mentions that they're particularly inaccurate with English-as-a-second-language speakers, meaning foreign/exchange students are more likely to get flagged as cheaters even when they haven't cheated.

I think the issue is that our education system is dated. Grades in general aren’t effective measures of knowledge, and they suck as motivators.

AI tools aren’t going anywhere so the education system will need to find a way of working with them. It’s time to modernise.

[–] reverie@lemmy.world 11 points 1 year ago

Exactly what someone who wrote their book with AI would say...

/s

[–] downpunxx@lemmy.world 1 points 1 year ago

"nobody here but us chickens"