GTC 2025: Nvidia Announces Next-Generation AI ‘Superchips’
Nvidia CEO Jensen Huang outlines the company’s AI vision, unveiling new supercomputers and software to power next-gen workloads. Analysts weigh in on what it means for the industry.

With demand for data center hardware surging, Nvidia CEO Jensen Huang has announced plans to release a new Blackwell Ultra AI chip later this year, followed by its next-generation Vera Rubin processors in 2026 and 2027.
During a keynote speech on Tuesday (March 18) to kick off the Nvidia GTC 2025 conference, Huang said the new Blackwell Ultra “superchip” will be available in the second half of 2025. It will offer 1.5 times better FP4 inferencing performance, 1.5 times more memory, and twice the bandwidth of the current Blackwell GB200 chip.
The next-generation Vera Rubin superchip, named after the American astronomer whose work provided key evidence for dark matter, will debut in the second half of 2026 and will perform 3.3 times faster than a system running Blackwell Ultra. It will feature 88 custom Arm-based CPUs named Vera and two GPUs named Rubin, Nvidia said.
In the second half of 2027, Nvidia plans to ship Rubin Ultra, which will perform 14 times faster than a Blackwell Ultra system. Rubin Ultra will also have 88 Vera CPUs, but it will double the Rubin GPU count to four. Each rack powered by Rubin Ultra will pack 15 exaflops of compute, compared to one exaflop in current systems, Huang told a packed crowd at the SAP Center in San Jose, California.
“Vera Rubin is incredible because the CPU is new,” Huang said. “It’s twice the performance of Grace, and more memory, more bandwidth, and yet, just a little tiny 50-watt CPU. Rubin [is] a brand new GPU. Everything is brand new except for the chassis.”
GTC Product Showcase
The next-generation AI chips were among a barrage of Nvidia announcements on Tuesday, including new hardware and software for data center operators and enterprises to build or use so-called AI factories – specialized data centers designed for AI workloads.
Among the releases were new DGX SuperPOD AI supercomputers that will run the Blackwell Ultra chips. Nvidia also announced Mission Control software, which will allow enterprises to automate the management and operations of their Blackwell-based DGX systems.
In addition, the company showcased Nvidia Dynamo, software designed to accelerate AI inferencing, and new photonics network switches that will speed network performance and reduce energy consumption in AI factories.

Nvidia’s DGX SuperPOD AI supercomputer. (Image: Nvidia)
Analyst Matt Kimball of Moor Insights and Strategy said Nvidia is trying to become a one-stop AI shop for the enterprise by tightly integrating and optimizing hardware, software and other tools. The boost in performance in both Blackwell Ultra and Vera Rubin will be much needed by enterprises adopting AI, he said.
“As the potential of AI begins to be realized in the enterprise, the compute power required to support these environments will be considerably more than what we will see in many of today’s deployments,” Kimball told Data Center Knowledge.
“Blackwell Ultra, in particular, is an incredible cornerstone to the AI factory. With its performance gains over Blackwell, and what Jensen Huang claimed as 50 times the data center revenue opportunity, I have to expect to see an accelerated adoption of this architecture.”
Huang Discusses the Next AI Wave After GenAI
During his keynote, Huang said Blackwell GPUs are in full production and have seen huge adoption, particularly among the four largest cloud service providers: Amazon Web Services, Google Cloud, Microsoft Azure and Oracle Cloud.
After the Rubin chips are released, the company plans to follow in 2028 with chips called Feynman, named after physicist Richard Feynman, he said.
He also discussed the next two waves of AI. The industry started with “perception AI,” such as computer vision and speech recognition. In the last five years, the industry has focused on generative AI, which has fundamentally changed the computing landscape, the Nvidia CEO said.
The next wave is “agentic AI,” which autonomously solves complex problems such as AI agents for customer service or coding assistance. The foundation of agentic AI is reasoning, Huang said.
“We now have AIs that can reason, which is fundamentally about breaking a problem down step by step. Maybe it approaches a problem in a few different ways and selects the best answer,” he told GTC delegates.
Looking further toward the tech horizon, Huang said the next wave after agentic AI is physical AI, such as robotics and self-driving cars, which are already being adopted.
However, due to AI’s advances, the computation needed for inferencing is dramatically higher than it used to be.
“The amount of computation we need at this point as a result of agentic AI, as a result of reasoning, is easily 100 times more than we thought we needed this time last year,” Huang said.
The solution, of course, is building Nvidia-powered AI factories. Huang said Nvidia Dynamo AI inferencing software will serve as the operating system for AI factories.
The open source Dynamo software, used to accelerate and scale AI reasoning, will take data center infrastructure and optimize it for performance, resulting in 30 times better performance, said Ian Buck, vice president of hyperscale and HPC at Nvidia, during a media briefing.
Analysts’ Take on Nvidia’s GTC Announcements
Nick Patience, vice president and AI practice lead at The Futurum Group, said Nvidia’s latest announcements were evolutionary rather than revolutionary, but still marked an important development in the company’s vision.
“Nvidia is building up its software stack to protect against greater silicon competition,” Patience told Data Center Knowledge.
The Dynamo software announcement is significant because it meets an emerging need as more organizations adopt AI reasoning models that require accelerated inferencing, Patience said.
Another key announcement is a family of open Llama Nemotron models with reasoning capabilities, which will make it easier for enterprises to adopt agentic AI, the analyst said. At the same time, Nvidia Mission Control software is important for enterprises in managing their GPU clusters.
Kimball noted that deploying, provisioning, tuning, and optimizing the performance of a GPU cluster has historically been extremely resource-intensive for CIOs. Mission Control, which automates management, simplifies the task and allows enterprises to maximize the performance of their AI factories.
“The benefit to a CIO is very obvious: exponentially better performance at a lower cost and a lower power footprint,” he said.
Echoing Patience’s notion of evolution over revolution, IDC analyst Ashish Nadkarni said the announcements indicated that Nvidia is entering a more stable growth phase.
“I think Nvidia as a company is maturing now. It’s like how Apple’s keynotes have become more predictable. It’s the same with Nvidia,” Nadkarni said.
“They are building the next-generation computing environments, and along with that, they’re building their software stacks and building an ecosystem.”