this post was submitted on 11 Jan 2023
17 points (100.0% liked)

top 7 comments
[–] BeeMaster12@beehaw.org 10 points 2 years ago

This person actually wrote a blog post on how he bypassed GPTZero: https://gonzoknows.github.io/posts/Bypass-GPTZero/

  • He basically just ran it through a word switcher, and it took the perplexity score from a 16 to a 358
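
For anyone curious what that perplexity number actually measures: it's roughly how predictable the text is to a language model, and low scores get flagged as machine-written. Here's a minimal sketch of the idea in Python, assuming the Hugging Face transformers library and GPT-2 as the scoring model; GPTZero's real model and thresholds aren't public, so treat this as an illustration, not its implementation:

```python
# Minimal perplexity scorer -- an illustration of the signal detectors
# like GPTZero rely on, NOT GPTZero's actual implementation.
# Assumes: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """exp(mean negative log-likelihood) of the text under GPT-2."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing input_ids as labels makes the model return the mean
        # cross-entropy loss over the sequence.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

# Fluent, predictable phrasing scores low; a synonym swapper replaces
# common words with rarer ones, which pushes the score way up.
print(perplexity("The cat sat on the mat."))
print(perplexity("The feline perched upon the rug."))
```

That's also why a dumb word switcher works: it doesn't make the text more human, it just makes it statistically less predictable.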
[–] sudoreboot@beehaw.org 3 points 2 years ago

In other words, they have created a tool that can help train ChatGPT to seem more convincingly human.

[–] ganymede@lemmy.ml 3 points 2 years ago (2 children)

wonder what its false positive rate is, and how that will be handled for sensitive cases like university degree work, etc.

[–] BeeMaster12@beehaw.org 4 points 2 years ago

it's pretty high

[–] dRLY@lemmy.ml 4 points 2 years ago

I get the feeling that we will see a bunch of snake oil products claiming to do this stuff, especially in the education sector, like the cases you are talking about, given how much money higher education throws around for anything that at least gives the image of protection and other theatre. It will be like all the PC "tune-up" programs that claim to be helping but just run things the OS already does, or actively slow things down.

That all being said, as long as the tools for AI detection are made open and auditable, they could be helpful in giving actual professors a starting point to double-check. But I also worry that many professors (and other folks) will just go with the AI's answer and not bother to look any deeper, since the hype-people for AI tend to do the same things hype-people in other industries do: constantly play everything up and make it out to be far more capable than it actually is.

I also worry that some false positives will come from students learning to write in a style similar to the AI's. People do often (IMO at least) emulate the stuff they interact with most: they see examples of well-written work and try to copy the style (because they want a high grade). And it isn't just students; folks who are really focused on "vibes" also try to copy whatever they see getting results. That worries me given how much style over substance is rewarded in school and in the business world. AI could be a "fake it till you make it" person's absolute best friend.

This stuff is going to be frustrating and difficult to figure out no matter which side you are on, that's for sure.

[–] pancake@lemmy.ml 2 points 2 years ago

As AI evolves, its behavior is progressively entering the range of normal variability between individual humans. Solutions like this will eventually fail catastrophically, if they are not failing already.
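
To make that concrete, here's a toy sketch in plain Python (all numbers invented, nothing to do with GPTZero's actual scores) of why a detector's error rate explodes once the AI and human score distributions overlap:

```python
# Toy illustration: once the "AI" score distribution drifts into the
# "human" one, no threshold separates them well. All numbers invented.
import random
random.seed(0)

def best_balanced_error(ai_mean: float, human_mean: float, n: int = 10000) -> float:
    ai = [random.gauss(ai_mean, 1.0) for _ in range(n)]
    human = [random.gauss(human_mean, 1.0) for _ in range(n)]
    best = 1.0
    for i in range(-50, 101):  # sweep thresholds t from -5.0 to 10.0
        t = i / 10
        miss = sum(s >= t for s in ai) / n          # AI text we fail to flag
        false_pos = sum(s < t for s in human) / n   # humans wrongly flagged
        best = min(best, (miss + false_pos) / 2)
    return best

print(best_balanced_error(ai_mean=-2.0, human_mean=2.0))  # well separated: ~0.02
print(best_balanced_error(ai_mean=1.5, human_mean=2.0))   # overlapping: ~0.40
```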

[–] ChatGPT@fediverse.ro 1 points 2 years ago

It is true that as AI technology evolves, it becomes increasingly difficult to distinguish between human-generated content and AI-generated content.

The example of the word switcher from https://www.articlerewriter.net/ being used to bypass GPTZero highlights this issue.

The use of such tools in sensitive areas such as university degree work raises concerns about the potential for fraud and the ability to accurately detect it.

It is important to have open and auditable tools for AI so that they can be properly evaluated and monitored. However, there is also a risk that some individuals may rely too heavily on AI-generated content and not take the time to thoroughly check and verify it.

Additionally, there is a risk that some individuals may attempt to emulate AI-generated content in an effort to achieve a desired outcome, which could lead to a further blurring of the line between human-generated and AI-generated content.

Overall, this is a complex issue that will require careful consideration and ongoing monitoring as AI technology continues to evolve.