Before Jensen Huang’s keynote at NVIDIA GTC in San Jose, our team joked about whether he would introduce yet another new buzzword, like he did last year with “Physical AI.” This time, he didn’t. The term stayed, which in itself felt like a signal.
Physical AI is no longer something that needs rebranding. It is becoming the way this space is understood.
In simple terms, Physical AI refers to systems that operate in the real world. Not just interpreting data, but perceiving environments, making decisions and acting within physical constraints. Robotics is the most visible example, but the same idea applies to infrastructure, mobility and industrial environments. At GTC, that shift was visible everywhere: robots, self-driving cars and trucks, and intelligent, self-learning environments.
From demos to operational systems
What stood out this year was not a single breakthrough, but a change in focus. The conversation is moving away from isolated demos toward full systems. The question is no longer what a model can do in a controlled setting, but how it is trained, validated, deployed and improved over time in real environments.
There is clear progress. At the same time, it is easy to get carried away when looking at GTC use cases and comfortably riding Waymos to the Airbnb and back. Inside that bubble, everything seems to move in a straight line. Outside of it, most companies are still figuring out where this fits and what is worth investing in. Not every use case will scale, and not every player will succeed.

Simulation as a starting point
One of the clearest shifts is the role of simulation. Companies like Hyundai and Foxconn are building development around precise digital representations of the physical world. Geometry matters more than ever, and systems are tested in simulation before touching reality. This changes how things are built. Iteration becomes faster and risks can be explored earlier.
At the same time, simulation is not the full answer. The last stretch still depends on real-world data. Edge cases, unexpected behavior and environmental variability are difficult to model. The world is still messy.
Acceleration is real, but uneven
Walking the exhibition floor told the same story from another angle. We tested robots, haptic gloves and different approaches to hand tracking. Some felt close to usable, others still far from it. The direction is clear, but maturity varies.

The pace of learning is clearly increasing. Advances in training, especially around video and multimodal data, are pushing things forward quickly. Listening to NVIDIA’s Jim Fan made that very concrete.
Standardization efforts like OpenUSD, which we have been using for years, and SimReady are starting to address this fragmentation by creating shared foundations. This kind of infrastructure is less visible, but it is critical for scaling.
Starting with the spatial context
Looking ahead, the challenge is less about new concepts and more about execution. Moving from controlled environments into real operations is still hard, and integrating into existing processes is often harder.
From our perspective, working with both robotics and smart city cases, the most practical path forward is incremental. Instead of aiming straight for full autonomy, companies benefit from building capabilities step by step. Structuring data, creating initial digital twins, and validating specific use cases tends to deliver real value faster.
In many cases, the first bottleneck is not the model or the robot, but the data. 3D assets are scattered, digital twins are incomplete, and simulation setups live in separate silos.
So the question becomes quite simple: how’s your data?
If your 3D data and digital twins are fragmented, moving forward with Physical AI becomes unnecessarily hard. Getting that foundation in place is often the most important first step and something we’re happy to help with.
AI-driven Robotics session in Helsinki
We will continue this discussion at our Physical AI event in Helsinki on April 23rd, focusing on what this shift looks like in practice. Companies like Konecranes, NVIDIA, Google and Dell will share their perspectives.
More details and registration
About the author
Laura Olin
Leveraging her deep expertise in leadership and organizational strategies, Laura keeps Younite’s AI and digital transformation initiatives on track. Her role is multi-dimensional, focusing on aligning the company’s strategic goals and ensuring that operations run smoothly.

