this post was submitted on 11 Aug 2023
39 points (91.5% liked)

BecomeMe


Social Experiment. Become Me. What I see, you see.

founded 1 year ago
[–] DarkDarkHouse@lemmy.sdf.org 10 points 1 year ago

Developers get code questions wrong 63% of the time, soooo

[–] Renneder@sh.itjust.works 5 points 1 year ago (1 children)
•  A Purdue University study found that OpenAI's ChatGPT bot gave incorrect answers to programming questions about half the time. 

•  However, 39.34% of participants preferred the ChatGPT responses due to their completeness and well-formulated language style. 

•  The study also showed that users can only identify errors in ChatGPT responses when they are obvious. 

•  Participants prefer ChatGPT's responses because of its polite language, articulated textbook-style responses, and comprehensiveness. 

•  The user study is intended to complement the paper's in-depth manual and linguistic analysis of ChatGPT responses. 

•  The authors note that ChatGPT responses contain more "driving attributes" but do not describe risks as often as Stack Overflow posts. 
[–] FlagonOfMe@sh.itjust.works 1 points 1 year ago* (last edited 1 year ago)

When you put 4 spaces at the start of each line, it renders as a code block. Are you doing that on purpose? I don't see why you would want your comments to use a monospace font.
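For anyone unfamiliar with the rule being described, this is a sketch of CommonMark-style indented code blocks (not part of the original comment):

```markdown
A normal paragraph.

    A line indented by four or more spaces renders as a
    monospace code block instead of a paragraph.
```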

In addition, in Sync for Lemmy, it wraps words in the middle, which makes it odd to read. It's even worse in a web browser, where the user has to scroll horizontally to read each line.

Screenshot:

[–] merc@sh.itjust.works 5 points 1 year ago

One of the main reasons was how detailed ChatGPT’s answers are. In many cases, participants did not mind the length if they are getting useful information from lengthy and detailed answers. Also, positive sentiments and politeness of the answers were the other two reasons.

Man, this answer is long, detailed, polite... it's great!

Sure, but it's wrong. It's just complete bullshit.

Yeah, sure.... still...

[–] kspatlas@artemis.camp 3 points 1 year ago

52% is an understatement

[–] nanoUFO@sh.itjust.works 3 points 1 year ago

A glorified Google search fails at novel or hard problems that haven't been answered before, or have only been answered badly.

[–] Qfuiyh@lemm.ee 2 points 1 year ago (3 children)

Title feels misleading; it gets Stack Overflow questions wrong 52% of the time.

However, it got 77% of easy Leetcode questions correct. Also, I believe that's on the first try, which is not generally how ChatGPT should be used.

Also also, you should probably be using a coding specific model if you want good coding results

[–] nanoUFO@sh.itjust.works 5 points 1 year ago

Every Leetcode question has been answered a billion times, and the model is trained on those billions of answers, so of course it gets those right.

[–] OskarAxolotl@lemmy.world 3 points 1 year ago

Probably because the model has seen thousands of possible solutions to those exact Leetcode problems. Actual questions people ask on StackOverflow tend to be much more specialized.

[–] HackerJoe@sh.itjust.works 2 points 1 year ago

But it confidently explains the wrong answers.

I just hope politicians don't find out how to use it. It'll be our doom.