this post was submitted on 13 Aug 2023
907 points (97.8% liked)

College professors are going back to paper exams and handwritten essays to fight students using ChatGPT

The growing number of students using the AI program ChatGPT as a shortcut in their coursework has led some college professors to reconsider their lesson plans for the upcoming fall semester.

[–] HexesofVexes@lemmy.world 127 points 1 year ago (33 children)

Prof here - take a look at it from our side.

Our job is to evaluate YOUR ability, and AI is a great way to mask poor ability. We have no way to determine whether you did the work or an AI did, and if called into court to certify your expertise, we could not do so beyond a reasonable doubt.

I'm not arguing exams are perfect, mind you, but I'd rather doubt a few students' inability (maybe it was just a bad exam for them) than always doubt their ability (is any of this their own work?).

Case in point: ALL students on my course with low (<60%) attendance this year scored 70s and 80s on the coursework and 10s and 20s in the OPEN BOOK exam. I doubt those 70s and 80s are real reflections of the students' ability, but they do suggest the students can obfuscate AI work well.

[–] maegul@lemmy.ml 10 points 1 year ago (12 children)

Here's a somewhat tangential counter, which I think some of the other replies are trying to touch on ... why, exactly, continue valuing our ability to do something a computer can so easily do for us (to some extent obviously)?

In a world where something like AI can come up and change the landscape in a matter of a year or two ... how much value is left in the idea of assessing people's value through exams (and to be clear, I'm saying this as someone who's done very well in exams in the past)?

This isn't to say that knowing things is bad, or that making sure people meet standards is bad, etc. Rather, it's to question whether exams are fit for purpose as a means of measuring what matters in a world where what's relevant, valuable or even accurate can change pretty quickly compared to the timeline of one's life or education. Not long ago we were told that we wouldn't have calculators with us everywhere, and now we could have calculators embedded in our ears if we wanted to. Analogously, learning and examination are probably premised on the notion that we won't be able to look things up all the time ... when, as current AI among other things suggests, that won't be true either.

An exam-based assessment structure naturally leans toward memorisation and being drilled in a relatively narrow band of problem-solving techniques,^1^ which are, IME, often crammed before the exam and largely forgotten soon afterward. So even presuming that the things students know during the exam are valuable, it's questionable whether the measurement of value the exam provides is itself valuable. And once the value of that information is brought into question ... you have to ask ... what are we doing here?

Which isn't to say that there's no value created in doing coursework and cramming for exams. Instead, given that a computer can now so easily augment our ability to do this assessment, you have to ask what education is for and whether it can become something better than what it is given what are supposed to be the generally lofty goals of education.

In reality, I suspect (as many others do) that the core value of the assessment system is to simply provide a filter. It's not so much what you're being assessed on as much as your ability to pass the assessment that matters, in order to filter for a base level of ability for whatever professional activity the degree will lead to. Maybe there are better ways of doing this that aren't so masked by other somewhat disingenuous goals?

Beyond that, there's a raft of things the education system could emphasise more than exam-based assessment: long-form problem solving and learning; understanding things or concepts as deeply as possible and creatively exploring the problem space and its applications; actually practising the scientific method; core and deep concepts, both in theory and application, rather than specific facts; breadth over depth, in general; and actual civics and the knowledge required to be a functioning member of the electorate.

All of which are hard to assess, of course, which is really the main point of pushing back against your comment ... maybe we're approaching the point where the cost-benefit equation for practicable assessment is being tipped.


  1. In my experience, the best way to prepare for exams, as is universally advised, is to take previous or practice exams ... which I think tells you pretty clearly what kind of task an exam actually is: a practiced routine in something that ranges narrowly between regurgitation and short-form, shallow problem solving.
[–] HexesofVexes@lemmy.world 64 points 1 year ago (11 children)

Ah the calculator fallacy; hello my old friend.

So, a calculator is a great shortcut, but it's useless for most mathematics (i.e. proof!). A lot of people assume that having a calculator means they do not need to learn mathematics - a lot of people are dead wrong!

In terms of exams being about memory, I run mine open book (i.e. students can bring in pre-prepared notes). Did you know some students still cram and forget right after the exam? Did you know they forget even faster with coursework?

Your argument is a good one, but let's take it further - let's rebuild education into an employer-centric training system, focusing on the use of digital tools alone. It works well, and productivity skyrockets, for a few years. But the humanities die out; pure mathematics (which helped create AI) dies off, and so do theoretical physics, chemistry, and biology. Suddenly innovation slows down, and you end up with stagnation.

Rather than moving us forward, such a system would lock us into place and likely create out of date workers.

At the end of the day, AI is a great tool - but so is a hammer, and like AI today, the hammer was a good tool for solving many of the problems of its time. However, I wouldn't want to only learn how to use a hammer; otherwise, how would I be replying to you right now?!?

[–] CapeWearingAeroplane@sopuli.xyz 17 points 1 year ago (1 children)

I think a central point you're overlooking is that we have to be able to assess people along the way. Once you get to a certain point in your education you should be able to solve problems that an AI can't. However, before you get there, we need some way to assess you in solving problems that an AI currently can. That doesn't mean that what you are assessed on is obsolete. We are testing to see if you have acquired the prerequisites for learning to do the things an AI can't do.

[–] maegul@lemmy.ml 3 points 1 year ago (1 children)
  1. AI isn’t as important to this conversation as I seem to have implied. The issue is us, ie humans, and what value we can and should seek from our education. What AI can or can’t do, IMO, only affects vocational aspects in terms of what sorts of things people are going to do “on the job”, and, the broad point I was making in the previous post, which is that AI being able to do well at something we use for assessment is an opportunity or prompt to reassess the value of that form of assessment.
  2. Whether AI can do something or not, I call into question the value of exams as a form of assessment, not assessment itself. There are plenty of other things that could be used for assessment or grading someone’s understanding and achievement.
  3. The real bottom line on this is cost, and that we're a metrics-driven society. Exams are cheap to run and provide clean numbers. Any more substantial form of assessment, however much better it targets valuable skills or understanding, would be harder to run. But again, I call into question how valuable what we're doing actually is compared to what we could be doing, however much more expensive, and whether we should really try to focus more on what we humans are good at (and even enjoy).

AI can't do jack shit with any meaningful accuracy anyway, so it's stupid to compare human education to AI blatantly making shit up like it always does.

[–] ZzyzxRoad@lemm.ee 10 points 1 year ago (1 children)

> Here's a somewhat tangential counter, which I think some of the other replies are trying to touch on ... why, exactly, continue valuing our ability to do something a computer can so easily do for us (to some extent obviously)?

My theory prof said there would be paper exams next year, because it's theory. You need to be able to read an academic paper and know what theoretical basis the authors had for their hypothesis. I'm in liberal arts/humanities. Yes, we still exist, and we are the ones that AI can't replace. If the whole idea is that it pulls from information that's already available, and a researcher's job is to develop new theories and ideas and do survey or interview research, then we need humans for that. If I'm trying to become a professor/researcher, using AI to write my theory papers is not doing me or my future students any favors. Statistical research, on the other hand - they already use programs for that and use existing data, so idk. But even then, any AI statistical analysis should be testing a new hypothesis that humans came up with, or a new angle on an existing one.

So idk how this would affect engineering or tech majors. But for students trying to become psychologists, anthropologists, social workers, or professors, using AI for written exams just isn't going to do them any favors.

[–] maegul@lemmy.ml 3 points 1 year ago

I also used to be a humanities person. The exam based assessments were IMO the worst. All the subjects assessed without any exams were by far the best. This was before AI BTW.

If you’re studying theoretical humanities type stuff, why can’t your subjects be assessed without exams? That is, by longer form research projects or essays?

[–] dragonflyteaparty@lemmy.world 8 points 1 year ago (1 children)

As they are talking about writing essays, I would argue the importance of being able to do it lies in being able to analyze a book/article/whatever, make an argument, and defend it. Being able to read and think critically about the subject would also be very important.

Sure, rote memorization isn't great, but neither is having to look something up every single time you need it because you forgot. There are also many industries in which people need a large information base available for quick recall, and learning to do that much later in life sounds very difficult. I'm not saying people should memorize everything, but not having many facts about the world around you at basic recall doesn't sound good either.

[–] tony@lemmy.hoyle.me.uk 3 points 1 year ago (9 children)

It's an interesting point. I do agree memorisation is (and always has been) used as a substitute for actual skills. It's always been a bugbear of mine that people aren't taught to problem-solve, just to regurgitate facts, when facts are literally at our fingertips 24/7.

[–] maegul@lemmy.ml 3 points 1 year ago

Yea, it isn’t even a new problem. The exam was questionable before AI.

[–] MNByChoice@midwest.social 7 points 1 year ago (3 children)

> Case in point, ALL students on my course with low (<60%) attendance this year scored 70s and 80s on the coursework and 10s and 20s in the OPEN BOOK exam. I doubt those 70s and 80s are real reflections of the ability of the students

I get that this is a quick post on social media and only an anecdote, but that is interesting. What do you think the connection is? AI, anxiety, or something else?

[–] HexesofVexes@lemmy.world 10 points 1 year ago (3 children)

It's a tough one, because I can't say with 100% certainty that AI is the issue. Anxiety is definitely a possibility in some cases, but not all; perhaps thinking time might be a factor, or even just good old copying and then running the work through a paraphraser. The large number of absences also means it was hard to benchmark those students based on class assessment (yes, we're always tracking how you're doing in class - not to judge you, but just in case you need some extra help!).

However, AI is a strong contender, since the "open book" part didn't include the textbook: students could take a booklet of their own notes (including fully worked examples) into the exam. They scored low because they didn't understand their own notes, and after reviewing the notes they brought in (all word-perfect), it was clear they did not understand the subject.

[–] Kage520@lemmy.world 10 points 1 year ago

That sounds like AI. If you do your homework then even sitting in a regular exam you should score better than 20%. This exam being open book, it sounds like they were unfamiliar with the textbook and could not find answers fast enough.

[–] adavis@lemmy.world 8 points 1 year ago* (last edited 1 year ago)

Not the previous poster, but I taught an introduction-to-programming unit for a few semesters. The unit was almost entirely portfolio based, i.e. all done in class or at home.

The unit had two litmus tests under exam-like conditions, on paper, in class. We're talking the week-10 test had complexity equal to week 5 or 6. Approximately 15-20% of the cohort failed this test, which - if they were supposedly up to date with class work - effectively proved they had cheated. They'd be submitting coursework of little 2D games, then on paper be unable to "with a loop, print all the odd numbers from 1 to 20".
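For scale, that on-paper task is about as small as programming problems get. The unit's language isn't stated, so as a neutral sketch, here's what it looks like in Python:

```python
def odd_numbers(limit):
    """Return the odd numbers from 1 up to and including limit."""
    odds = []
    for n in range(1, limit + 1):  # the loop the question asks for
        if n % 2 == 1:
            odds.append(n)
    return odds

# Print them one per line, as the paper test asked.
for n in odd_numbers(20):
    print(n)
```

A student who had genuinely written 2D games for the portfolio should produce something like this in a minute or two.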

[–] Smacks@lemmy.world 5 points 1 year ago

Graduated a year ago, just before this AI craze was a thing.

I feel there's been a social shift when it comes to education these days. It's mostly: "do a 500-1,000 word essay to get 1.5% of your grade". The education doesn't matter anymore; the grades do. If you pick something up along the way, great! But it isn't much of a priority.

I think it partially comes from colleges squeezing students for their funds, and from indifferent professors who assign busywork for its own sake. A lot of uncaring professors just throw tons of work at students, turning them back to the textbook whenever they ask questions.

However, I don't doubt that a good chunk of students use AI on their work just to get it out of the way. That really sucks, and I feel bad for the professors who actually care and put effort into their classes. But I also feel the majority do it in response to the monotonous grind that a lot of other professors give them.

[–] mrspaz@lemmy.world 4 points 1 year ago (1 children)

I recently finished my degree, and exam-heavy courses were the bane of my existence. I could sit down with the homework, work out every problem completely with everything documented, and then sit to an exam and suddenly it's "what's a fluid? What's energy? Is this a pencil?"

The worst example was a course with three exams worth 30% of the grade, attendance 5% and homework 5%. I had to take the course twice; 100% on HW each time, but barely scraped by with a 70.4% after exams on the second attempt. Courses like that took years off my life in stress. :(

[–] HexesofVexes@lemmy.world 5 points 1 year ago (1 children)

If you don't mind me asking - what kind of degree was it, and what format were the exams?

[–] mrspaz@lemmy.world 2 points 1 year ago (3 children)

Sure; it was Mechanical Engineering. The class was "Vibrations & Controls": the first half of the course covered vibrations and oscillatory systems, and the second half covered the theory of feedback and control systems (classic "PID" controllers for the most part). The exams were pencil-and-paper, in-person, and time-limited.

The first attempt we were allowed nothing except the exam and paper for answers; honestly I'm not sure what that professor was expecting.

In my second attempt the professor provided a formula sheet, but he was of the "if you know F=ma, you can derive anything you need!" mindset, so the formula sheets were sparse, to put it mildly. It was just enough to keep me from fully collapsing in panic and bombing, but it was close.
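(For anyone unfamiliar with the "classic PID" mentioned above: the core idea is small enough to sketch. This is a generic textbook discrete-time update - the gain names `kp`, `ki`, `kd` are the usual conventions, not anything specific to that course:)

```python
def pid_step(error, state, kp, ki, kd, dt):
    """One update of a textbook discrete PID controller.

    error: setpoint minus measurement at this time step.
    state: (integral, previous_error) carried between calls.
    Returns (control_output, new_state).
    """
    integral, prev_error = state
    integral += error * dt                    # I term accumulates error
    derivative = (error - prev_error) / dt    # D term reacts to change
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)
```

With `ki = kd = 0` this reduces to a pure proportional controller, `output = kp * error` - which is roughly the "derive everything from first principles" spirit of that formula sheet.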

[–] Spike@feddit.de 4 points 1 year ago (1 children)

> We have no way to determine if you did the work, or if an AI did, and if called into a court to certify your expertise we could not do so beyond a reasonable doubt.

Could you ever, though, when giving them work they had to do outside your physical presence? People have always had their friends, parents or ghostwriters do the work for them. You should know that.

This is not an AI problem; AI "just" made it far more widespread and easier to access.

[–] HexesofVexes@lemmy.world 4 points 1 year ago

"Sometimes" would be my answer. I've caught students who colluded during online exams, and even managed to spot students pasting directly from an online search. Those were painful conversations, but I offered them resits, and they were all honest and passed with some extra classes.

With AI, detection is impossible at the moment.

[–] garibaldi_biscuit@lemmy.world 2 points 1 year ago (1 children)

Student here - How does that cursive longhand thing go again?

[–] HexesofVexes@lemmy.world 4 points 1 year ago

"Avoid at all costs because we hate marking it even more than you hate writing it"?

An in-person exam can be done in a locked-down IT lab, which leads to a better marking experience - and, I suspect, a better exam experience!
