this post was submitted on 23 May 2024
79 points (94.4% liked)

[–] autotldr@lemmings.world 2 points 5 months ago

This is the best summary I could come up with:


Until now, Nvidia’s produced a new architecture roughly once every two years — revealing Ampere in 2020, Hopper in 2022, and Blackwell in 2024, for example.

(The industry darling H100 AI chip was Hopper, and the B200 is Blackwell, though those same architectures are used in gaming and creator GPUs as well.)

Huang says Nvidia will accelerate every other kind of chip it makes to match that cadence, too.

“New CPUs, new GPUs, new networking NICs, new switches... a mountain of chips are coming,” he says.

Huang also shared a couple of sales pitches on the call to explain the incredible demand for Nvidia’s AI GPUs:

Interestingly, Nvidia’s CFO says that automotive will be its “largest enterprise vertical within data center this year,” pointing to Tesla’s purchase of 35,000 H100 GPUs to train its “full self-driving” system, while “consumer internet companies” like Meta will remain a “strong growth vertical,” too.


The original article contains 377 words, the summary contains 155 words. Saved 59%. I'm a bot and I'm open source!