this post was submitted on 17 Jul 2023
346 points (95.5% liked)
WormGPT Is a ChatGPT Alternative With 'No Ethical Boundaries or Limitations'

[–] TheDarkKnight@lemmy.world 30 points 1 year ago (1 children)

I work in cybersecurity for an F100 and we've been war-gaming shit like this for a while. There are just so many unethical uses for the current gen of AI tools like this one, and it keeps me up at night thinking about future iterations of them, to be honest.

[–] anakaine@lemmy.world 4 points 1 year ago (1 children)

Treat CVEs as prompts and add target fingerprinting to surface which CVEs apply. Gets you one step closer to script-kiddie red team ops. Not quite there, but it would be fun if it could do the network part too and chain responses back into the prompt for further assessment.
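
Roughly the kind of feedback loop I mean, as a toy sketch. Every function and value here is a made-up placeholder, not a real tool, feed, or API:

```python
# Illustrative only: fingerprint a target, pull matching advisory text into the
# prompt, then chain each model response back in for another assessment pass.

def fingerprint_target(host: str) -> dict:
    # Placeholder: a real fingerprinter would probe banners, ports, versions.
    return {"host": host, "services": [{"name": "nginx", "version": "1.18.0"}]}

def lookup_cves(fingerprint: dict) -> list[str]:
    # Placeholder: a real implementation would query an advisory feed.
    return ["CVE-XXXX-XXXX: example advisory text for the fingerprinted service"]

def query_model(prompt: str) -> str:
    # Placeholder for whatever unrestricted model is being abused.
    return f"(model output for a {len(prompt)}-character prompt)"

def assess(host: str, rounds: int = 3) -> list[str]:
    fp = fingerprint_target(host)
    context = f"Target fingerprint: {fp}\nAdvisories: {lookup_cves(fp)}"
    transcript = []
    for _ in range(rounds):
        response = query_model(context)
        transcript.append(response)
        # Chain the response back into the prompt for the next pass.
        context += f"\nPrevious assessment: {response}"
    return transcript

print(assess("203.0.113.10"))
```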

[–] TheDarkKnight@lemmy.world 5 points 1 year ago

We're expecting multiple AI agents to be working in concert on different parts of a theoretical attack, and you nailed it with the networking piece. While a lot of aspects of a cyber attack evolve with time and technical change, the network piece tends to be "sturdier" than the others, so we expect extremely competent network intrusion capabilities to be developed and deployed by a specialized AI.

I think we'll soon be seeing the development of AIs that specialize in malware payloads, working with ones that have social engineering capabilities, ones with network penetration specializations, etc., all operating at a much greater competency than their human counterparts (or just in much greater numbers than humans with similar capabilities).
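
For the war-gaming side, that specialist split is easy enough to model even in a toy tabletop exercise. This is just a sketch; the Agent class, role names, and scoring are assumptions for illustration, not any real framework:

```python
# Toy war-game model of the specialist-agent split described above.
from dataclasses import dataclass

@dataclass
class Agent:
    role: str          # e.g. "payload_generation", "social_engineering", "network_intrusion"
    competency: float  # abstract 0-1 score assigned for the exercise

def campaign_score(agents: list[Agent]) -> float:
    # Toy assumption: a chained attack is gated by its weakest specialist.
    return min(a.competency for a in agents) if agents else 0.0

team = [
    Agent("payload_generation", 0.8),
    Agent("social_engineering", 0.7),
    Agent("network_intrusion", 0.9),
]
print(f"campaign score: {campaign_score(team):.2f}")
```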

I'm not really even sure what will be effective in countering them either. AI-powered defense, I guess, but I still feel like that favors the attacker in the end.