[–] sacredfire@programming.dev 1 points 1 year ago (1 children)

Why does a pre-trained model need expensive private hardware after it was trained, other than to handle API requests faster? Is OpenAI training ChatGPT on inferior hardware compared to these sophisticated private versions you mentioned?

[–] GBU_28@lemm.ee 3 points 1 year ago

Fine-tuning, while much more efficient than training from scratch, can still be a large amount of work.
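
For a sense of what that work looks like, here's a minimal LoRA fine-tuning sketch using the Hugging Face transformers + peft libraries — the model name and training data are placeholder assumptions, not anyone's actual pipeline:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"  # assumption: any causal-LM checkpoint works here
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains small adapter matrices instead of all the base weights.
lora = LoraConfig(r=8, lora_alpha=16,
                  target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of weights train

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4)
corpus = ["...one document from your private domain data..."]  # placeholder
for text in corpus:
    batch = tokenizer(text, return_tensors="pt")
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Even though only the adapter weights train, the full base model still has to fit in memory for every forward pass, which is where the hardware cost comes in.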

Then consider that your target corpus of data may also be large.

Then consider that running your reasoning tasks across that corpus also takes strong hardware to get production-ready response times.
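
To put "production-ready response times" in perspective, here's a rough way to measure generation throughput — the model name and the throughput target are illustrative assumptions, not figures from this thread:

```python
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "mistralai/Mistral-7B-v0.1"  # assumption: any 7B-class model
tok = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base, torch_dtype=torch.float16, device_map="auto")

inputs = tok("Summarize this contract clause: ...",
             return_tensors="pt").to(model.device)
start = time.time()
out = model.generate(**inputs, max_new_tokens=128)
new_tokens = out.shape[1] - inputs["input_ids"].shape[1]
# Interactive chat starts feeling sluggish below roughly 10-20 tokens/sec.
print(f"{new_tokens / (time.time() - start):.1f} tokens/sec")
```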

No, OpenAI isn't using inferior hardware, but their model goals, token-chunking strategies, and overall corpus are generalist in nature.

There are also processing strategies teams are using to go beyond the context-window ("memory") limitations GPT-4 has, which provide massive benefits to coherency, essentially anti-hallucination, and better overall reasoning.
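
That's essentially retrieval-augmented generation: chunk the corpus, embed the chunks, and hand the model only the pieces relevant to each question. A minimal sketch — the embedding model, file name, and chunk size are all assumptions:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumption: any embedder

def chunk(text, size=500):
    # Naive fixed-size chunking; real token-chunking strategies split
    # on sentence or section boundaries instead.
    return [text[i:i + size] for i in range(0, len(text), size)]

chunks = chunk(open("corpus.txt").read())  # placeholder corpus file
vecs = embedder.encode(chunks, normalize_embeddings=True)

def retrieve(question, k=3):
    q = embedder.encode([question], normalize_embeddings=True)[0]
    top = np.argsort(vecs @ q)[-k:][::-1]  # cosine similarity, best k chunks
    return [chunks[i] for i in top]

# Only the retrieved chunks enter the prompt, so the model answers from
# grounded excerpts instead of its (possibly hallucinated) memory.
context = "\n\n".join(retrieve("What does the contract say about renewal?"))
```

Because the prompt only ever carries the top-k chunks, the corpus can grow far beyond the model's context window while answers stay grounded in retrieved text.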