this post was submitted on 28 Jul 2023

AI Infosec

Infosec news and articles related to AI.

Anyone else getting tired of all the clickbait articles about PoisonGPT, WormGPT, etc. that never provide any sort of evidence to back up their claims?

They're always talking about how the models are so good and can write malware, but damn near every GPT model I've seen can barely write basic code. There's no shot it's writing actually valuable malware, let alone the FUD (fully undetectable) malware some are claiming.

Thoughts?

[–] dudewitbow@lemmy.ml 3 points 1 year ago* (last edited 1 year ago) (2 children)

AI can definitely write code, just not without some level of supervision. For example, it's possible to develop a basic game by using ChatGPT to code modules for the game, while the "programmer" is in charge of interconnecting them and telling ChatGPT to make revisions whenever they hit a problem. Code generation isn't outright autonomous at the moment.
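A toy sketch of that workflow (everything here is hypothetical: the two "generated" functions stand in for ChatGPT output, and the glue code is the human's part):

```python
# Hypothetical illustration of the workflow above: the LLM produces
# small self-contained modules, and the human "programmer" supplies
# the glue that interconnects them.

# --- Module 1: pretend this came back from a prompt like
# "write a function that moves a player by a delta" ---
def move_player(position, delta):
    return position + delta

# --- Module 2: pretend this came back from
# "write a function that scores hits at 10 points each" ---
def score_hits(hits):
    return hits * 10

# --- Glue written by the human, wiring the generated pieces together ---
def game_tick(position, delta, hits):
    """One step of the game loop, combining the generated modules."""
    return move_player(position, delta), score_hits(hits)

print(game_tick(5, 2, 3))  # (7, 30)
```

When a module misbehaves, the human feeds the error back and asks for a revision; the supervision is the loop, not the typing.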

As for malware, yeah, you would probably need to train on a database of actual malware code to get GPT to a level where it can generate (really, it would probably just replicate) malware code. But the problem is that if a malware codebase is public, there are likely already security patches for the exploits it uses.

[–] DarkOlive88@infosec.pub 1 points 1 year ago (1 children)

I think that's what Darktrace did for their AI cybersecurity engine. Is it effective? My buddy has two servers that are exactly the same: one triggers alerts in Darktrace all the time, the other doesn't, and Darktrace can't tell him what's wrong with it or why it's triggering.

[–] dudewitbow@lemmy.ml 1 points 1 year ago

I don't know, as I'm not someone deep into cybersecurity or anything, but the idea makes sense. Their idea is to have a machine learn what "normal traffic" looks like in a system and react when abnormal traffic arrives.

"Their" being Darktrace's security product.
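The general idea (learn a baseline of normal traffic, flag outliers) can be sketched very simply. This is not Darktrace's actual method, just a minimal anomaly-detection illustration using a z-score over requests per minute, with all numbers made up:

```python
from statistics import mean, stdev

def build_baseline(samples):
    """Learn what "normal" traffic volume looks like from historical samples."""
    return mean(samples), stdev(samples)

def is_abnormal(value, baseline, threshold=3.0):
    """Flag traffic more than `threshold` standard deviations from the mean."""
    mu, sigma = baseline
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold

# Requests per minute observed during a "training" window:
normal_traffic = [95, 102, 98, 101, 99, 97, 103, 100, 96, 104]
baseline = build_baseline(normal_traffic)

print(is_abnormal(100, baseline))  # typical volume -> False
print(is_abnormal(500, baseline))  # sudden spike  -> True
```

A scheme like this also hints at the complaint above: two "identical" servers can still have slightly different learned baselines, so one can keep crossing the threshold while the other never does, with no obvious explanation of why.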