this post was submitted on 15 Sep 2024
891 points (98.1% liked)

Technology

[–] Telorand@reddthat.com 273 points 1 month ago (44 children)

Wow, the text generator that doesn't actually understand what it's "writing" is making mistakes? Who could have seen that coming?

I once asked one to write a basic 50-line Python program (just to flesh things out), and it made so many basic errors that any first-year CS student could have caught them. Nobody should trust LLMs with anything related to security, FFS.

[–] theterrasque@infosec.pub 1 points 1 month ago (5 children)

What LLM did you use, and how long ago was it? Claude Sonnet usually writes pretty good Python for smaller scripts (a few hundred lines).

[–] Telorand@reddthat.com 5 points 1 month ago (4 children)

It was ChatGPT from earlier this year. The mistakes weren't a huge deal for me, because I had a very specific use case and just wanted to save some time; I knew I'd have to troubleshoot grafting it into my function. But even after I pointed out that it was using deprecated syntax (and how to correct it), it just spat out the code again with even more errors, still using the deprecated syntax.
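
(Purely for illustration, since the original script isn't reproduced here: a classic example of the pattern is importing ABCs straight from `collections`, which was deprecated for years and finally removed in Python 3.10, yet still shows up in output from models trained on older code.)

```python
# Illustration only, not the original script.
# What an LLM trained on old tutorials might emit:
#   from collections import Iterable      # ImportError on Python 3.10+

# Current form:
from collections.abc import Iterable


def flatten(items):
    """Recursively flatten nested iterables, keeping strings/bytes intact."""
    for item in items:
        if isinstance(item, Iterable) and not isinstance(item, (str, bytes)):
            yield from flatten(item)
        else:
            yield item


print(list(flatten([1, [2, [3, 4]], "ab"])))  # -> [1, 2, 3, 4, 'ab']
```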

All LLMs will fail like this in some way, because they don't actually understand what they're generating (i.e. they have no mechanism for self-evaluating the veracity of their statements).

[–] theterrasque@infosec.pub -1 points 1 month ago* (last edited 1 month ago) (1 children)

This is a very simple one, but someone lower down apparently had issues with a script like this:

https://i.imgur.com/wD9XXYt.png

I tested the code, and it works. If I were going to change anything, I'd probably move the matplotlib import into the else branch so it's only imported when the image actually needs to be displayed.
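
Something like this structure, with the import moved as suggested (a sketch only; the screenshot isn't reproduced here, and the image-handling details are assumed):

```python
# Sketch only: the actual script is in the screenshot above; the resizing and
# argument names here are assumed. The point is deferring the matplotlib
# import into the branch that actually displays the image.
import argparse

from PIL import Image


def main():
    parser = argparse.ArgumentParser(description="Halve an image, then save or show it.")
    parser.add_argument("path", help="path to the input image")
    parser.add_argument("--save", metavar="OUT", help="write the result here instead of showing it")
    args = parser.parse_args()

    img = Image.open(args.path)
    img = img.resize((img.width // 2, img.height // 2))

    if args.save:
        img.save(args.save)
    else:
        # Only pay for the matplotlib import when we actually display the image.
        import matplotlib.pyplot as plt

        plt.imshow(img)
        plt.axis("off")
        plt.show()


if __name__ == "__main__":
    main()
```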

I have a lot more complex generations in my history, but all of them contain personal or business details and involve much more back-and-forth. But try it yourself; Claude has a free tier. Just be clear in the prompt about what you want. It might surprise you.

[–] Telorand@reddthat.com 4 points 1 month ago (1 children)

I appreciate the effort you put into the comment and your kind tone, but I'm not really interested in increasing LLM presence in my life.

I said what I said, and I experienced what I experienced. Providing an example where it works in no way falsifies the core of my original comment: LLMs have no place generating code for secure applications without human review, because they have no mechanism to comprehend or proofread their own work.

[–] FlorianSimon@sh.itjust.works 4 points 1 month ago (1 children)

I'd also add that, depending on the language, the ways you can shoot yourself in the foot are very subtle (cf. C and C++, which are popular languages for "secure" stuff).

It's already hard not to write buggy code, and I don't think you'll catch those bugs just by reviewing LLM-generated code, because detecting issues during code review is much harder than catching them while you're writing the code.

Oh, and I assume it'll be tough to get an LLM to follow MISRA conventions.
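
A Python analogue (hypothetical, and nowhere near as dangerous as the C/C++ cases, but the same "reads fine in review" problem) is the shared mutable default argument:

```python
# Hypothetical Python analogue of a bug that reads fine in review:
# the default list is created once at definition time and shared across calls.

def add_member(name, members=[]):
    members.append(name)
    return members

admins = add_member("alice")
guests = add_member("bob")      # expected a fresh list...
print(guests)                   # ['alice', 'bob'] -- state leaked between calls
```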

[–] Telorand@reddthat.com 3 points 1 month ago

It's already hard not to write buggy code, and I don't think you'll catch those bugs just by reviewing LLM-generated code, because detecting issues during code review is much harder than catching them while you're writing the code.

Definitely. That's what I was trying to drive at, but you said it well.
