this post was submitted on 10 Apr 2024
1286 points (99.0% liked)

Programmer Humor

you are viewing a single comment's thread
[–] halloween_spookster@lemmy.world 42 points 6 months ago (1 children)

I once asked ChatGPT to generate some random numerical passwords as I was curious about its capabilities to generate random data. It told me that it couldn't. I asked why it couldn't (I knew why it was resisting but I wanted to see its response) and it promptly gave me a bunch of random numerical passwords.

[–] NucleusAdumbens@lemmy.world 9 points 6 months ago (2 children)

Wait can someone explain why it didn't want to generate random numbers?

[–] ForgotAboutDre@lemmy.world 58 points 6 months ago (2 children)

It won't generate truly random numbers. It'll generate numbers that look random, drawn from patterns in its training data.

If it's asked to generate passwords I wouldn't be surprised if it generated lists of leaked passwords available online.

These models are created from masses of data scraped from the internet. Most of which is unreviewed and unverified. They really don't want to review and verify it because it's expensive and much of their data is illegal.
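This is exactly why you shouldn't ask a language model for passwords in the first place. A minimal sketch of the right way, using a cryptographically secure RNG (Python's `secrets` module; function name and default length are just for illustration):

```python
import secrets

def random_numeric_password(length: int = 8) -> str:
    # secrets.choice draws from the OS CSPRNG, so each digit is
    # independently and uniformly random -- unlike an LLM's output,
    # which reflects whatever digit sequences were common in training.
    return "".join(secrets.choice("0123456789") for _ in range(length))

print(random_numeric_password())
```

Unlike a model's completions, nothing here depends on what strings appeared (or leaked) online.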

[–] dukk@programming.dev 15 points 6 months ago

Also, researchers asking ChatGPT for long lists of random numbers were able to extract its training data from the output (which OpenAI promptly blocked).

Or maybe that’s what you meant?

[–] Natanael 5 points 6 months ago

Its training and fine-tuning include a lot of specific instructions about what it can and can't do, and if something sounds like something it shouldn't try, it will refuse. Spitting out unbiased random numbers is something it fundamentally can't do by virtue of being a neural network architecture. Not sure if OpenAI has specifically included an instruction about it being bad at randomness, though.

While the model is fed randomness when you prompt it, it doesn't have raw access to those random numbers and can't pass them through to the output. Instead, the randomness just nudges which of its learned, biased token probabilities get picked, so it's likely to give you numbers it saw less often.
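A toy sketch of what "fed randomness" means here: sampling with temperature draws from the model's *learned* distribution, so digits the model saw often still dominate. All logits below are hypothetical, just to show the bias:

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random.Random(0)):
    # Softmax over temperature-scaled logits, then a weighted draw.
    # The randomness picks WITHIN the learned distribution; it never
    # makes the distribution uniform.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return rng.choices(range(len(logits)),
                       weights=[e / total for e in exps], k=1)[0]

# Pretend the model strongly prefers the digit "7" (hypothetical logits).
logits = [0.0] * 10
logits[7] = 3.0
draws = [sample_token(logits) for _ in range(1000)]
print(draws.count(7) / 1000)  # far above the 0.1 a uniform RNG would give
```

Raising the temperature flattens the bias but never removes it, which is why model output makes a poor random number source.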