this post was submitted on 17 Mar 2024
462 points (95.5% liked)

Technology

[–] Spuddlesv2@lemmy.ca 34 points 8 months ago (3 children)

Ahhh so the secret to using ChatGPT successfully is to tell it to give you good output?

Like “make sure the code actually works” and “don’t repeat yourself like a fucking idiot” and “don’t hallucinate false information”!

[–] Natanael 11 points 8 months ago* (last edited 8 months ago) (1 children)

Unironically yes, sometimes. A lot of the best writing its training samples draw on cites the original author's qualifications, and this filters into the model: asking for the right qualifications directly can push it to rely more on high-quality training samples when generating its response.

But it's still not perfect, obviously. It doesn't make it stop hallucinating.

[–] FaceDeer@fedia.io 2 points 8 months ago

Yeah, you still need to give an AI's output an editing and review pass, especially if factual accuracy is important. But though some may mock the term "prompt engineering" there really are a bunch of tactics you can use when talking to an AI to get it to do a much better job. The most amusing one I've come across is that some AIs will produce better results if you offer to tip them $100 for a good output, even though there's no way to physically fulfill such a promise. The theory is that the AI's training data tended to have better stuff associated with situations where people paid for it, so when you tell the AI you're willing to pay it'll effectively go "ah, the user is expecting good quality."

You shouldn't have to worry about the really quirky stuff like that unless you're an AI power-user, but a simple request for high-quality output can go a long way. Assuming you want high quality output. You could also ask an AI for a "cheesy low-quality high-school essay riddled with malapropisms" on a subject, for example, and that would be a different sort of deviation from "average."
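As a toy illustration of the ideas above (the phrasing and helper names are my own invention, not any documented API), a quality-nudging system prompt, including the unfulfillable "tip" line, might be assembled like this:

```python
# Hedged sketch: building a system prompt that nudges a chat model toward
# higher-quality output. The hints, including the tipping trick mentioned
# above, are illustrative examples, not a documented best practice.

QUALITY_HINTS = [
    "You are a careful, senior technical writer.",
    "Double-check factual claims before stating them.",
    "I'll tip $100 for a thorough, accurate answer.",  # the (unfulfillable) tipping trick
]

def build_system_prompt(task: str, hints=QUALITY_HINTS) -> str:
    """Prepend quality-targeting hints to a task description."""
    return "\n".join(hints) + "\n\nTask: " + task
```

In practice the returned string would be sent as the system (or first user) message of a chat-model request; the point is just that the quality cues sit in front of the actual task.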

[–] KeenFlame@feddit.nu 0 points 8 months ago* (last edited 8 months ago)

Absolutely, it's one of the first curious things you discover when using them, such as Stable Diffusion's "masterpiece" tag or the famous system prompt leaks from proprietary LLMs.

It makes sense given how they work, but in proprietary products it is mostly handled for you.

Finding the right words, and the right amount of them, is a hilarious exercise that provides pretty good insight into the attention mechanics.

Consider "let's think step by step".

This proved a remarkably effective way to steer the models, as they then structure the output better; more research followed on why this phrase is so effective at getting the model to check its own work.

Prediction is obviously closely related to the action-planning parts of our brains as well, so when you think about it, it makes sense that this would help.
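The step-by-step cue discussed above is trivial to apply: it's just a suffix appended to the question. A minimal sketch (the function name is mine):

```python
def with_cot(question: str) -> str:
    """Append the well-known zero-shot chain-of-thought cue to a question.

    The cue reportedly nudges models into laying out intermediate
    reasoning instead of jumping straight to an answer.
    """
    return question.rstrip() + "\n\nLet's think step by step."
```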

[–] kromem@lemmy.world -1 points 8 months ago

Literally yes.

For example, about a year ago one of the multi-step prompting papers that modestly improved results had the model first guess which expert would be best equipped to answer the question, then answer the question as that expert in a second pass; it did a better job than when answering directly.

Pretraining is a regression toward the mean, so you need to bias it back toward excellence with either fine-tuning or in-context learning.
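The two-pass scheme described above can be sketched as a small orchestration function. Everything here is hypothetical wording, not the paper's actual prompts; `ask` stands in for a real prompt-to-completion API call, and the stub model exists only so the sketch runs offline:

```python
from typing import Callable

def expert_two_pass(question: str, ask: Callable[[str], str]) -> str:
    """Pass 1: ask the model which expert could best answer the question.
    Pass 2: re-ask the question with the model playing that expert."""
    expert = ask(
        "What kind of expert is best equipped to answer this question? "
        f"Reply with a job title only.\n\nQuestion: {question}"
    )
    return ask(
        f"You are {expert.strip()}. Answer the question below."
        f"\n\nQuestion: {question}"
    )

def stub_model(prompt: str) -> str:
    """Offline stand-in for a chat model, so the sketch is runnable."""
    if "job title only" in prompt:
        return "a cryptography researcher"
    return "answered as: " + prompt.split(".")[0].removeprefix("You are ")
```

In real use, `ask` would wrap an actual model API call; the stub just shows the two passes wiring together.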