this post was submitted on 19 Jan 2024
253 points (95.3% liked)
Technology
Would that be diminishing returns on quality, or training speed?
If I could tweak a model and test it in an hour vs. four hours, that would really speed up development time, right?
Quality. But yeah, using the extra compute to speed up development iterations would be a benefit. They could train a bunch of models in parallel and either pick the best one to deploy or combine them all as an ensemble.
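The train-in-parallel-then-ensemble idea can be sketched roughly like this. This is a toy illustration only: the "models" below are stand-in functions returning class probabilities, not anything trained, and the averaging scheme is just one common way to combine an ensemble.

```python
# Toy sketch of ensembling: average the predicted class probabilities
# of several independently trained models. The "models" here are
# hypothetical stand-ins (plain functions), not real networks.

def ensemble_predict(models, x):
    """Average the probability outputs of several models for input x."""
    probs = [m(x) for m in models]
    n = len(probs)
    return [sum(p[i] for p in probs) / n for i in range(len(probs[0]))]

# Three stand-in "models", each mapping an input to two class probabilities.
models = [
    lambda x: [0.7, 0.3],
    lambda x: [0.6, 0.4],
    lambda x: [0.8, 0.2],
]

print(ensemble_predict(models, "some input"))
```

Averaging probabilities is only one option; picking the single best model on a validation set, or majority-voting over predicted labels, would fit the same parallel-training setup.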
My guess is that the main reason for all the GPUs is that they're going to offer hosting and training infrastructure for everyone. That would align with the strategy of releasing models as "open" and then trying to entice people into their cloud ecosystem. Or maybe they really are trying to achieve AGI, as they state in the article. I don't really know of any ML architectures that would allow for AGI, though (besides the theoretical, incomputable AIXI).