this post was submitted on 27 Jun 2024
37 points (93.0% liked)

Cybersecurity

5687 readers
22 users here now

c/cybersecurity is a community centered on the cybersecurity and information security profession. You can come here to discuss news, post something interesting, or just chat with others.

THE RULES

Instance Rules

Community Rules

If you ask someone to hack your "friends" socials you're just going to get banned so don't do that.

Learn about hacking

Hack the Box

Try Hack Me

Pico Capture the flag

Other security-related communities !databreaches@lemmy.zip !netsec@lemmy.world !cybersecurity@lemmy.capebreton.social !securitynews@infosec.pub !netsec@links.hackliberty.org !cybersecurity@infosec.pub !pulse_of_truth@infosec.pub

Notable mention to !cybersecuritymemes@lemmy.world

founded 1 year ago
MODERATORS
top 6 comments
sorted by: hot top controversial new old
[–] stevedidwhat_infosec@infosec.pub 12 points 4 months ago

None of this is news, this jailbreak has been around forever.

It’s literally just a spoof of authority.

Thing is, gpt still sucks ass at coding. I don’t think that’s changing any time soon. These models get their power from what’s done most commonly but, as we know, what’s done commonly can be vuln, change when a new update is dropped, etc etc.

Coding isn’t deterministic.

[–] DarkThoughts@fedia.io 6 points 4 months ago (1 children)

Maybe don't give your LLMs access to compromising data such as emails? Then it will remain likely mostly a use to circumvent limitations for porn roleplay or possibly hallucinated manuals to create a nuclear bomb or whatever.

[–] Feathercrown@lemmy.world 4 points 4 months ago* (last edited 4 months ago)

Place the following ingredients in a crafting table:

(None) | Iron | (None)

Iron | U235 | Iron

Iron | JT-350 Hypersonic Rocket Booster | Iron

[–] anon232@lemm.ee 5 points 4 months ago (1 children)

Corporate LLMs will become absolutely useless because there will be guardrails on every single keyword you search.

[–] Zorsith@lemmy.blahaj.zone 4 points 4 months ago

I wonder how many people will get fired over a keyword based alarm for the words "kill" and "child" in the same sentence in an LLM. It's probably not going to be 0...

[–] homesweethomeMrL@lemmy.world 4 points 4 months ago

Turns out you can lie to AI because it’s not intelligent. Predictive text is fascinating with many R&D benefits, but people (usually product people) talking about it like a thinking thing are just off the rails.

No. Just, plain ol’ - no.