this post was submitted on 10 Jul 2023
Technology
[–] AzPsycho@lemmy.world 21 points 1 year ago (4 children)

I experimented by asking it to write a SQL query for a platform whose entire database map is available online. The data I asked for was impossible to get without exporting some of the data from those tables into temp tables using subqueries and then running a comparative omissions analysis.

Instead of doing that, it just made up fake tables and wrote a query that claimed the data lived in those fake tables.
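The failure mode is easy to reproduce locally. Below is a minimal sketch using Python's sqlite3 with a made-up two-table schema (all table and column names are illustrative, not the platform's real ones): a query against a hallucinated table errors out immediately, while the temp-table-plus-subquery approach described above actually runs.

```python
import sqlite3

# Hypothetical schema standing in for the platform's real database map;
# the table and column names here are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")
conn.execute("INSERT INTO events VALUES (1, 1)")

# A hallucinated query referencing a table that does not exist fails outright.
try:
    conn.execute("SELECT * FROM user_activity_summary")
except sqlite3.OperationalError as e:
    print(e)  # no such table: user_activity_summary

# The working approach: stage rows in a temp table, then compare with a
# subquery to find users missing from events (an "omissions" check).
conn.execute(
    "CREATE TEMP TABLE active_users AS SELECT DISTINCT user_id FROM events"
)
missing = conn.execute(
    "SELECT name FROM users WHERE id NOT IN (SELECT user_id FROM active_users)"
).fetchall()
print(missing)  # [('bob',)]
```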

[–] redcalcium@c.calciumlabs.com 8 points 1 year ago

It's basically a souped-up autocomplete system. Don't expect it to apply any independent thinking at all.
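To make the analogy concrete, here is a toy bigram autocomplete: it only ever suggests the word most often seen next in its training text, with no grasp of what it's saying. This illustrates the analogy, not how an LLM is actually implemented.

```python
from collections import Counter, defaultdict

# Toy training text; pairs of adjacent words form the bigram statistics.
corpus = "the cat sat on the mat and the cat ran".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def complete(word):
    # Suggest the most frequent continuation seen after `word`,
    # or None if the word never appeared mid-sentence.
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(complete("the"))  # 'cat' (seen twice, vs 'mat' once)
```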

[–] SJ_Zero@lemmy.fbxl.net 5 points 1 year ago

I asked it to write a review of Beowulf in the style of Beowulf. It wrote something rhyming, which is not the style of Beowulf. I said "rewrite this so it doesn't rhyme" and it gave me something rhyming again. I tried several times in several different ways, including reasoning with it, and it just kept kicking out a rhyming poem.

[–] 50gp@kbin.social 3 points 1 year ago

It's good to remember that many of these chatbot AIs want to give an answer to the prompt instead of saying "sorry, that's not possible," and will then generate complete garbage as a result.

[–] fearout@kbin.social 2 points 1 year ago

Out of curiosity, are you using 3.5 or 4? I've found that GPT-4 is pretty good at these tasks, while 3.5 is almost useless. A thing that often helps is to ask it "is your answer correct?" That seems to make it find the errors and fix them.
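The "is your answer correct?" trick is essentially a manual self-check loop. A sketch of that control flow, with a stubbed `ask_model` standing in for a real chat-API call (the function and its canned replies are hypothetical, chosen so the loop runs without any external service):

```python
def ask_model(prompt, transcript=()):
    # Hypothetical stub for a chat-API call: the first answer is wrong,
    # and the follow-up "verification" turn corrects it.
    return "2 + 2 = 5" if not transcript else "Correction: 2 + 2 = 4"

def answer_with_self_check(question, rounds=1):
    transcript = []
    answer = ask_model(question)
    transcript.append(answer)
    for _ in range(rounds):
        # Feed the answer back and ask the model to verify itself.
        answer = ask_model("Is your answer correct? " + answer, transcript)
        transcript.append(answer)
    return answer

print(answer_with_self_check("What is 2 + 2?"))
```

Whether the second pass actually finds real errors depends entirely on the model; the loop itself just gives it a chance to.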