this post was submitted on 04 Sep 2024
915 points (98.2% liked)

Does AI actually help students learn? A recent experiment in a high school provides a cautionary tale. 

Researchers at the University of Pennsylvania found that Turkish high school students who had access to ChatGPT while doing practice math problems did worse on a math test compared with students who didn’t have access to ChatGPT. Those with ChatGPT solved 48 percent more of the practice problems correctly, but they ultimately scored 17 percent worse on a test of the topic that the students were learning.

A third group of students had access to a revised version of ChatGPT that functioned more like a tutor. This chatbot was programmed to provide hints without directly divulging the answer. The students who used it did spectacularly better on the practice problems, solving 127 percent more of them correctly compared with students who did their practice work without any high-tech aids. But on a test afterwards, these AI-tutored students did no better. Students who just did their practice problems the old fashioned way — on their own — matched their test scores.

[–] Insig@lemmy.world 51 points 2 months ago (2 children)

At work we gave a 16/17-year-old work experience over the summer. He was using ChatGPT and not understanding the code it was outputting.

In his last week he asked why he was writing a print statement, something like:

print(f"message {thing}")
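For context, that line uses a Python f-string (a formatted string literal, available since Python 3.6): the expression inside the braces is evaluated and embedded into the string. A minimal illustration, using a made-up variable name:

```python
# f-strings evaluate the expressions inside {braces}
# and substitute the results into the string.
thing = "world"
print(f"message {thing}")  # prints: message world
```
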

[–] aniki@discuss.tchncs.de 5 points 2 months ago (2 children)

Sounds like operator error, because he could have asked ChatGPT and gotten the correct answer about Python f-strings...

[–] ulterno@lemmy.kde.social 9 points 2 months ago (1 children)

Students first need to learn to:

  1. Break down the line of code, then
  2. Ask the right questions

The student in question probably didn't develop the mental faculties required to think, "Hmm... what the 'f'?"

A similar thing happened to me when I had to teach a BTech grad with 2 years of prior experience. At first, I found it hard to believe that someone couldn't ask such questions of themselves, by themselves. I am repeatedly dumbfounded at how someone manages to be so ignorant of something they are typing, and have recently realised (after interacting with multiple such people) that this is actually the norm^[and that I am the weirdo for trying hard and visualising the C++ abstract machine in my mind].

[–] obbeel@lemmy.eco.br 1 points 2 months ago (1 children)

No. Print statements, console input, and building little games like tic-tac-toe and crosswords aren't the right way to learn computer science. It's how things are currently done, but you learn much more from open-source code and from trying to build useful things yourself. I would never go back to doing those little chores to get a grade.

[–] ulterno@lemmy.kde.social 2 points 2 months ago

> I would never go back to doing those little chores to get a grade.

So either you have finished obtaining all the academic certifications that require said chores, or you are going to fail at getting a grade.

[–] Buddahriffic@lemmy.world 0 points 2 months ago

It all depends on how and what you ask it, plus an element of randomness. Remember that it's essentially a massive text predictor. The same question asked in different ways can lead it into predicting text based on different conversations it trained on. There's a ton of people talking about python, some know it well, others not as well. And the LLM can end up giving some kind of hybrid of multiple other answers.

It doesn't understand anything, it's just built a massive network of correlations such that if you type "Python", it will "want" to "talk" about scripting or snakes (just tried it, it preferred the scripting language, even when I said "snake", it asked me if I wanted help implementing the snake game in Python 😂).

So it is very possible for it to give accurate responses sometimes and wildly different responses at other times. Like with the "African countries that start with 'K'" question: I've seen reasonable responses and meme ones. I've even seen it claim there are none while acknowledging Kenya in the same response.

[–] copd@lemmy.world 1 points 2 months ago (1 children)

I'm afraid to ask, but what's wrong with that line? In the right context that's fine to do, no?

[–] Insig@lemmy.world 1 points 2 months ago

There is nothing wrong with it. He just didn't know what it meant after using it for a little over a month.
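The part he presumably didn't understand is what the `f` prefix does. A quick sketch of the difference (variable name invented for illustration):

```python
thing = 42

# Without the f prefix, the braces are literal text:
print("message {thing}")   # prints: message {thing}

# With the f prefix, the expression in braces is evaluated:
print(f"message {thing}")  # prints: message 42
```
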