this post was submitted on 04 Jan 2024

Information Security

 

cross-posted from: https://programming.dev/post/8121843

~n (@nblr@chaos.social) writes:

This is fine...

"We observed that participants who had access to the AI assistant were more likely to introduce security vulnerabilities for the majority of programming tasks, yet were also more likely to rate their insecure answers as secure compared to those in our control group."

[Do Users Write More Insecure Code with AI Assistants?](https://arxiv.org/abs/2211.03622)

top 2 comments
[–] jacksilver@lemmy.world 4 points 10 months ago

This seems tied to the issues I've had when using LLMs: they spit out what they think might work, not what is best. I frequently get suggestions that I need to clean up, or I have to ask follow-up guiding questions.

If I had to guess, it's because nothing enforces quality on the training data or generated text, so the model tends toward the most frequent approaches rather than the best ones.
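
To make that "frequent but not best" point concrete, here's a minimal hypothetical sketch (not taken from the paper; the table, column, and function names are made up): the string-interpolated SQL that assistants commonly suggest, which is open to injection, next to the parameterized version that is the safer answer.

```python
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # The pattern assistants often emit: building the query by string
    # interpolation, which lets attacker-controlled input rewrite the SQL.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safer(conn: sqlite3.Connection, username: str):
    # Safer variant: a parameterized query, so the driver handles escaping
    # and the input can never change the structure of the statement.
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()
```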

[–] EmperorHenry@infosec.pub 1 point 10 months ago

Most likely, yes.