this post was submitted on 10 Apr 2024
437 points (100.0% liked)

[–] xthexder@l.sw0.com 12 points 7 months ago* (last edited 7 months ago)

This graph actually shows a bit more about what's happening with the randomness (or "temperature") setting of the LLM.
The model predicts a probability for every word (token) in its vocabulary coming next, all at once.
The temperature then says how random it should be when picking from that list of probable next words. A temperature of 0 means it always picks the most likely next word, which in this case ends up being 42.
As the temperature increases, the sampling gets more random (but you can see it still isn't a perfectly uniform distribution, even at higher temperature values).
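
For anyone curious, here's a minimal sketch of how temperature sampling typically works. The logit values and the three candidate tokens are made up purely for illustration; real models do this over their whole vocabulary.

```python
import numpy as np

def sample_next_token(logits, temperature):
    """Pick the index of the next token from a vector of logits using temperature sampling."""
    if temperature == 0:
        # Greedy decoding: always take the single most likely token.
        return int(np.argmax(logits))
    # Divide logits by temperature, then softmax into a probability distribution.
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    scaled -= scaled.max()              # subtract max for numerical stability
    probs = np.exp(scaled)
    probs /= probs.sum()
    # Draw one token index according to that distribution.
    return int(np.random.choice(len(probs), p=probs))

# Toy example: pretend these are the logits for the tokens "41", "42", "47".
logits = [1.0, 3.5, 0.5]
print(sample_next_token(logits, 0))    # always picks index 1 ("42")
print(sample_next_token(logits, 1.0))  # usually "42", sometimes the others
print(sample_next_token(logits, 2.0))  # more spread out, but still biased toward "42"
```

Higher temperature flattens the distribution (everything gets closer to equally likely), while lower temperature sharpens it toward the top choice, which is why 0 always gives the same answer.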