![Nvidia](https://static.seekingalpha.com/cdn/s3/uploads/getty_images/1988609819/image_1988609819.jpg?io=getty-c-w750)
Justin Sullivan
Nvidia (NASDAQ:NVDA) made several announcements at its highly anticipated GTC developer conference on Monday, including a push to move the world to accelerated computing and the new Blackwell GB200 GPU.
“Accelerated computing has reached the tipping point,” Nvidia Chief Executive Jensen Huang said at the annual confab. “General purpose computing has run out of steam.”
He added that accelerated computing offers a dramatic speed-up over general-purpose computing, an impact felt across all industries but especially in tech, where it helps companies create products.
Huang also announced several new partners, including Ansys (ANSS), Cadence Design Systems (CDNS) and Synopsys (SNPS), which will use CUDA to accelerate their design and simulation software.
Huang also said that Taiwan Semiconductor (TSM) is putting cuLitho, Nvidia's computational lithography library, into production to speed the manufacturing of the next generation of advanced processors.
“We’re gonna have to build even bigger GPUs. Hopper is fantastic, but we need bigger GPUs,” Huang said, introducing the new Blackwell GPU platform, named after American mathematician David Blackwell.
The platform has 208B transistors across two dies on a single package, offering full cache coherency. It also carries 192GB of HBM3E high-bandwidth memory at 8 Gbps and 1.8TB/second of NVLink bandwidth per chip.
The Blackwell platform is available both as an accelerator, the B200, which can go into existing H100 or H200 systems, and as the GB200 superchip.
Huang also noted the Blackwell GPU is more powerful and faster than Hopper, offering 2.5 times Hopper's 8-bit floating point (FP8) performance and five times its FP4 performance, along with high-bandwidth memory that supports model sizes six times larger at five times the bandwidth.
It would take just 2,000 of the new GB200 GPUs and four megawatts of power to train GPT-MoE-1.8T in 90 days, compared to 8,000 GH100 GPUs using 15 megawatts, Huang said.
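A back-of-the-envelope sketch of what those keynote figures imply, assuming both training runs take the full 90 days (the numbers below are taken from Huang's comparison; the script itself is illustrative, not from Nvidia):

```python
# Compare the two training-run configurations Huang cited for GPT-MoE-1.8T:
# 2,000 GB200 GPUs at 4 MW vs. 8,000 GH100 GPUs at 15 MW, both over 90 days.

HOURS_PER_DAY = 24
days = 90

blackwell_gpus, blackwell_mw = 2_000, 4
hopper_gpus, hopper_mw = 8_000, 15

# Total energy consumed by each run, in megawatt-hours
blackwell_mwh = blackwell_mw * days * HOURS_PER_DAY  # 8,640 MWh
hopper_mwh = hopper_mw * days * HOURS_PER_DAY        # 32,400 MWh

print(f"GPUs needed: {hopper_gpus / blackwell_gpus:.1f}x fewer")      # 4.0x
print(f"Energy used: {hopper_mwh / blackwell_mwh:.2f}x less")         # 3.75x
```

On these figures, the same model trains with a quarter of the GPUs and roughly 3.75 times less energy.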
Nvidia said Amazon (AMZN) Web Services, Google (GOOG) (GOOGL) Cloud, Microsoft (MSFT) Azure and Oracle (ORCL) Cloud Infrastructure will be among the first cloud service providers to offer Blackwell-powered instances, as will NVIDIA Cloud Partner program companies Applied Digital, CoreWeave, Crusoe, IBM (IBM) Cloud and Lambda.
“Blackwell is just going to be an amazing system for generative AI,” Huang said, adding that data centers will be known as AI factories.
“Blackwell will be the most successful product launch in our history,” Huang added, noting sovereign nations, cloud service providers, telecom companies and more have signed up for the new platform.
Also announced at the annual event were the new X800 series networking switches, designed for massive-scale AI, and Nvidia's next AI supercomputer, the NVIDIA DGX SuperPOD, which uses GB200 Grace Blackwell Superchips.
Nvidia also unveiled its new NVLink Switch chip. It has 50B transistors and can connect multiple GPUs so they behave as one giant GPU, Huang said.
Other announcements include an Earth digital twin focused on climate science; additional services to aid drug, medtech and digital health discovery; and expanded partnerships with Microsoft Azure and AWS to expand the use of generative AI.
This story is breaking news…