this post was submitted on 15 Sep 2023
465 points (97.2% liked)

[–] JDubbleu@programming.dev 18 points 1 year ago (1 children)

The thing is, LLMs are extremely useful at aiding humans. I use one all the time at work, and it has made me faster at my job, but left unchecked they do really stupid shit.

[–] CoderKat@lemm.ee 2 points 1 year ago* (last edited 1 year ago)

I agree they can be useful (I've found intelligent code snippet autocompletion to be great), but it's really important that the humans using the tool are very skilled and aware of the limitations of AI.

E.g., my usage generates only very, very small amounts of code (usually a few lines). I have to very carefully read those lines to make sure they are correct. It's never generating something innovative. It simply guesses what I was going to type anyway. So it only saves me time spent typing, and the AI is by no means in charge of logic. It's also wrong a lot of the time. Anyone who lets AI generate a substantial amount of code, or lets it generate code they don't understand thoroughly, is both a fool and a danger.

It does save me time, especially on boilerplate and common constructs, but it's certainly not revolutionary, and it's far too inaccurate to do the kinds of things non-programmers tend to think AI can do.
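To give a concrete sense of the scale being described (a hypothetical illustration, not output from any specific tool): the completions worth accepting are usually a few predictable lines, like the obvious body of a small helper function, which a human can fully verify at a glance:

```python
# Hypothetical example of the kind of few-line, predictable completion
# described above. Suppose the human has typed the signature and the
# docstring; an autocomplete tool might then suggest the one-line body.

def invert(mapping: dict) -> dict:
    """Return a dict mapping each value back to its key."""
    # Suggested completion: an obvious dict comprehension.
    # Short enough to read carefully and confirm it's correct,
    # which is the whole point.
    return {value: key for key, value in mapping.items()}

print(invert({"a": 1, "b": 2}))  # {1: 'a', 2: 'b'}
```

The logic here is trivial boilerplate the programmer was about to type anyway; the tool only saves keystrokes, and the human stays responsible for checking every line.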