this post was submitted on 26 Aug 2023
817 points (91.5% liked)
Programmer Humor
Fine-tuning, while much more efficient than starting fresh, can still be a large amount of work.
Then consider that your target corpus of data may also be large.
Then consider that running your reasoning tasks across that corpus also takes strong hardware to get production-ready response times.
No, OpenAI isn't using inferior hardware, but their model goals, token-chunking strategies, and overall corpus are generalist in nature.
There are also processing strategies teams are using to go beyond the context-window "memory" limitations GPT-4 has. These provide massive benefits to coherency, essentially acting as anti-hallucination measures and improving overall reasoning.
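To give a rough idea of what one of those strategies looks like: a common pattern is to chunk the corpus, score each chunk against the query, and only feed the best-fitting chunks into the model's limited context. Here's a minimal toy sketch (all function names are illustrative, not any specific team's implementation, and real systems use embeddings rather than this word-overlap score):

```python
def chunk_text(text, chunk_size=50):
    """Split text into chunks of roughly chunk_size words."""
    words = text.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]

def score(chunk, query):
    """Toy relevance score: count of query words appearing in the chunk.
    Real pipelines would use embedding similarity instead."""
    return len(set(chunk.lower().split()) & set(query.lower().split()))

def select_context(corpus, query, budget_words=100):
    """Pick the highest-scoring chunks that fit within the word budget,
    simulating a fixed context window."""
    chunks = sorted(chunk_text(corpus),
                    key=lambda c: score(c, query),
                    reverse=True)
    picked, used = [], 0
    for c in chunks:
        n = len(c.split())
        if used + n <= budget_words:
            picked.append(c)
            used += n
    return picked
```

Only the selected chunks get sent to the model, so the corpus can be far larger than the context window; the trade-off is that coherency now depends on how good your retrieval step is.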