Monday, August 11, 2025

NVIDIA Research Shapes Physical AI

Physical AI — the engine behind modern robotics, self-driving cars and smart spaces — relies on a mix of neural graphics, synthetic data generation, physics-based simulation, reinforcement learning and AI reasoning. It’s a combination well-suited to the collective expertise of NVIDIA Research, a global team that for nearly 20 years has advanced the now-converging fields of AI and graphics.

That’s why at SIGGRAPH, the premier computer graphics conference taking place in Vancouver through Thursday, Aug. 14, NVIDIA Research leaders will deliver a special address highlighting the graphics and simulation innovations enabling physical and spatial AI.

“AI is advancing our simulation capabilities, and our simulation capabilities are advancing AI systems,” said Sanja Fidler, vice president of AI research at NVIDIA. “There’s an authentic and powerful coupling between the two fields, and it’s a combination that few have.”

At SIGGRAPH, NVIDIA is unveiling new software libraries for physical AI — including NVIDIA Omniverse NuRec 3D Gaussian splatting libraries for large-scale world reconstruction, updates to the NVIDIA Metropolis platform for vision AI, and new NVIDIA Cosmos and NVIDIA Nemotron reasoning models. Cosmos Reason is a new reasoning vision language model for physical AI that enables robots and vision AI agents to reason like humans using prior knowledge, physics understanding and common sense.

Many of these innovations are rooted in breakthroughs by the company’s global research team, which is presenting over a dozen papers at the show on advancements in neural rendering, real-time path tracing, synthetic data generation and reinforcement learning — capabilities that will feed the next generation of physical AI tools.

How Physical AI Unites Graphics, AI and Robotics

Physical AI development starts with the construction of high-fidelity, physically accurate 3D environments. Without these lifelike virtual environments, developers can’t train advanced physical AI systems such as humanoid robots in simulation, because the skills the robots would learn in virtual training wouldn’t translate well enough to the real world.

Picture an agricultural robot using the exact amount of pressure to pick peaches off trees without bruising them, or a manufacturing robot assembling microscopic electronic components on a machine where every millimeter matters.

“Physical AI needs a virtual environment that feels real, a parallel universe where the robots can safely learn through trial and error,” said Ming-Yu Liu, vice president of research at NVIDIA. “To build this virtual world, we need real-time rendering, computer vision, physical motion simulation, 2D and 3D generative AI, as well as AI reasoning. These are the things that NVIDIA Research has spent nearly two decades getting good at.”
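The "trial and error" learning Liu describes is, at its core, reinforcement learning against a simulated environment. The sketch below is a deliberately minimal, self-contained illustration of that loop — a toy one-dimensional world with tabular Q-learning — not NVIDIA's actual simulation stack or training setup; the environment, rewards and hyperparameters are all illustrative assumptions.

```python
# Toy sketch of trial-and-error learning in a simulated world:
# tabular Q-learning on a tiny 1-D "corridor". Purely illustrative;
# the environment and hyperparameters are assumptions, not NVIDIA's setup.
import random

N_STATES = 5          # positions 0..4; the goal sits at position 4
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action):
    """Simulated environment: reward 1.0 only upon reaching the goal."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action index]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit, occasionally explore
            a = rng.randrange(2) if rng.random() < EPS else max((0, 1), key=lambda i: q[s][i])
            nxt, r, done = step(s, ACTIONS[a])
            # Q-learning update: learn from the simulated outcome
            q[s][a] += ALPHA * (r + GAMMA * max(q[nxt]) - q[s][a])
            s = nxt
    return q

q = train()
# After training, the greedy policy steps right (toward the goal) from every state.
policy = [max((0, 1), key=lambda i: q[s][i]) for s in range(N_STATES - 1)]
```

The point of the simulation is exactly what the quote says: the agent can fail thousands of times at zero cost, and only the learned policy needs to survive contact with the real world — which is why the fidelity of the virtual environment matters so much.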

NVIDIA’s legacy of breakthrough research in ray tracing and real-time computer graphics, dating back to the research organization’s inception in 2006, plays a critical role in enabling the realism that physical AI simulations demand. Much of that rendering work, too, is powered by AI models — a field known as neural rendering.

“Our core rendering research fuels the creation of true-to-reality virtual worlds used to train advanced physical AI systems, while AI is in turn helping us create those 3D worlds from images,” said Aaron Lefohn, vice president of graphics research and head of the Real-Time Graphics Research group at NVIDIA. “We’re now at a point where we can take pictures and videos — an accessible form of media that anyone can capture — and rapidly reconstruct them into virtual 3D environments.”

