Nvidia CEO Jensen Huang Introduces Next-Generation Rubin AI Chips at GTC 2025

Nvidia founder and CEO Jensen Huang opened the company’s AI developer conference on Tuesday, addressing thousands of attendees with a bold statement: artificial intelligence is at an “inflection point.”
At GTC 2025—hailed as the “Super Bowl of AI”—Huang’s keynote focused on Nvidia’s latest advancements in AI and his vision for the industry’s trajectory in the coming years. He highlighted the surging demand for GPUs from major cloud service providers and projected that Nvidia’s data center infrastructure revenue could reach $1 trillion by 2028.
Huang’s much-anticipated presentation unveiled Nvidia’s upcoming GPU architectures: Blackwell Ultra and Vera Rubin, the latter named after the renowned astronomer. Blackwell Ultra is set to launch in the second half of 2025, followed by the Rubin AI chip in late 2026, with Rubin Ultra expected in 2027.
During his two-hour keynote, Huang discussed AI’s rapid evolution over the past decade—from perception and computer vision to generative AI and now agentic AI, which he described as AI capable of reasoning.
“AI understands context, it understands what we’re asking, and it comprehends the meaning of our requests,” Huang said. “It now generates answers, fundamentally transforming computing.”
Huang emphasized that the next major leap in AI is already underway: robotics. He introduced the concept of “physical AI,” which enables robots to grasp real-world concepts such as friction, inertia, cause and effect, and object permanence.
“Each wave of AI innovation unlocks new market opportunities for all of us,” Huang noted.
A key aspect of this progress is synthetic data generation, in which AI-created data is used to train models. AI needs digital experience to learn from, and it now learns so quickly that relying on human-generated training data has become increasingly impractical.
“There’s a limit to the amount of data and human demonstrations available,” Huang said. “One of the biggest breakthroughs in recent years has been reinforcement learning.”
Nvidia’s technology, he explained, is designed to facilitate this type of AI learning, where the system continuously refines its approach as it attempts to solve complex problems.
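To make the idea concrete, the sketch below shows the basic reinforcement-learning loop Huang is describing: an agent repeatedly acts inside a simulated environment and refines its behavior from the rewards it receives. The toy five-state "corridor" world, the reward scheme, and the Q-learning update rule are generic textbook choices chosen for illustration; they are not Nvidia's training stack.

```python
# Illustrative only: a toy reinforcement-learning loop in which an agent
# refines its policy by trial and error inside a simulated environment.
# The 5-state "corridor" world, reward scheme, and Q-learning update are
# generic textbook choices, not Nvidia's actual training pipeline.
import random

N_STATES = 5          # states 0..4; reaching state 4 is the goal
ACTIONS = [-1, +1]    # step left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

# Q-table: estimated return for each (state, action) pair
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Simulated environment: move, clip to the corridor, reward only at the goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    done = nxt == N_STATES - 1
    return nxt, reward, done

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit what has been learned, occasionally explore
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted future value
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# After training, the greedy policy should step right (+1) from every non-goal state
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])
```

The same trial-and-error pattern scales up in robotics work: because each attempt happens in simulation rather than on physical hardware, the agent can run millions of such episodes cheaply, which is the efficiency gap Huang and Lee point to.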
To support the development of humanoid robots, Huang introduced Isaac GR00T N1, an open-source foundation model designed for robotics training. The model will be integrated with an updated Cosmos AI system to generate simulated training data.
Benjamin Lee, a professor of electrical and systems engineering at the University of Pennsylvania, emphasized the challenges of training robots due to the time and costs associated with real-world data collection. He pointed out that simulated environments have long been used in reinforcement learning, allowing researchers to test models more efficiently.
“I think it’s really exciting. Providing a platform, and an open-source one, will allow more people to engage with reinforcement learning,” Lee said. “More researchers will have access to synthetic data—not just major industry players, but also academic institutions.”
Huang also introduced Newton, an open-source physics engine for robotics simulation, developed in collaboration with Google DeepMind and Disney Research.
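Newton's own API was not shown onstage, but the kind of computation a robotics physics engine performs each frame can be sketched generically: integrate forces such as gravity and friction over a small timestep so a simulated robot experiences inertia and contact the way a real one would. The semi-implicit Euler integrator and all parameters below are made-up illustrations, not code from Newton.

```python
# Illustrative only: a generic per-frame rigid-body update of the sort a physics
# engine performs, here for a block sliding on a floor under kinetic friction.
# Parameters and the integration scheme are assumptions for illustration.
DT = 1.0 / 240.0       # simulation timestep (seconds)
GRAVITY = 9.81         # m/s^2
MU = 0.4               # coefficient of kinetic friction

velocity = 3.0         # m/s, initial push along the floor
position = 0.0         # m

for _ in range(2000):
    if velocity == 0.0:
        break
    # Friction opposes motion; on a flat floor the normal force is m * g,
    # so the deceleration it produces is mu * g regardless of mass
    decel = MU * GRAVITY if velocity > 0 else -MU * GRAVITY
    new_velocity = velocity - decel * DT
    # If friction would reverse the direction of motion, the block stops instead
    if new_velocity * velocity <= 0:
        new_velocity = 0.0
    velocity = new_velocity
    position += velocity * DT   # semi-implicit Euler: integrate with the updated velocity

print(f"block slides {position:.2f} m before friction stops it")
```

Running thousands of such steps per second of simulated time is what lets a "physical AI" model learn concepts like friction and inertia without ever touching a real object.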
As he concluded his keynote, Huang was joined onstage by a small, boxy robot named Blue, which emerged from a hatch in the floor. The robot beeped at Huang and followed his commands, standing beside him as he wrapped up his presentation.
“The age of generalist robotics is here,” Huang declared.