Every CES has its buzzword, and this year's was "physical AI" — a term that, twelve months from now, might well define how we remember 2026.
The buzz manifested materially, as might be expected of physical AI. Humanoid robots waved, danced and poured drinks with varying degrees of grace. Autonomous vehicles glided through carefully choreographed demos while their makers promised a future where the steering wheel becomes as quaint as the rotary phone.
In truth, the term “physical AI” remains loosely defined. It’s essentially shorthand for the convergence of robotics, autonomous systems, simulation software and edge computing. It describes efforts to extend artificial intelligence beyond screens and servers into machines that perceive, reason and act in the physical world.
But how much of this is solid ground, and how much is smoke from the demo floor? The answer involves both genuine technical progress and the realities of regulation, cost and deployment timelines.
The physical AI market
The physical AI market reached approximately $5 billion in 2025, and analysts project it will expand to between $68 billion and $84 billion by 2034–35, with compound annual growth rates ranging from 31% to 34%. These projections reflect genuine momentum: the operational stock of industrial robots reached 4.7 million units in 2025, marking a 9% year-over-year increase.
According to Barclays’ “AI Gets Physical” report, the humanoid market, currently valued at between $2 billion and $3 billion, could surge to $40 billion by 2035 — or even $200 billion in optimistic scenarios.
This growth is driven by three converging demographic forces identified in the Barclays report: aging populations (the share of adults 65 and older is set to rise from 10% today to 16% by 2050), rapid urbanization (nearly 70% of people expected to live in cities by midcentury) and evolving work preferences that see younger generations avoiding repetitive or physically demanding jobs regardless of pay.
The result is a growing labor mismatch. Between 2010 and 2024, the agricultural labor force declined by 37% in Europe, 23% in Japan, and 17% in the US, according to the report. In healthcare, maintaining today’s ratio of nursing staff to seniors would require personnel growth of 40% in the US and 30% in Europe by 2050.
Humanoid robots are marketed as the answer to this gap: their developers argue that because the world is built for humans, a human shape lets these machines operate in existing environments without costly redesigns.
What has changed is both intelligence and economics. Barclays identifies “the three Bs” — brains (cognitive AI and compute), brawn (actuators and mechanical systems) and batteries — as the core technologies driving a dramatic 30-fold cost reduction over the past decade, from approximately $3 million to around $100,000 per unit. The brawn component accounts for roughly 50% of production costs, compared with 35% for brains and 15% for batteries.
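Taking the report's figures at face value, the cost shares imply a rough per-unit bill of materials. The sketch below is purely illustrative: the dollar amounts are approximations derived from the percentages Barclays cites, not vendor pricing.

```python
# Illustrative breakdown of Barclays' "three Bs" cost shares,
# applied to the ~$100,000 per-unit figure cited in the report.
UNIT_COST = 100_000  # approximate current cost per humanoid, USD

shares = {"brawn": 0.50, "brains": 0.35, "batteries": 0.15}

breakdown = {part: UNIT_COST * share for part, share in shares.items()}
for part, cost in breakdown.items():
    print(f"{part:>9}: ${cost:,.0f}")

# A decade ago the same class of machine cost roughly $3 million,
# which is the 30-fold reduction the report highlights:
print(f"reduction: {3_000_000 / UNIT_COST:.0f}x")
```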
Nvidia’s vision
Jensen Huang, CEO of Nvidia, announced a broad set of models, frameworks and hardware platforms at CES 2026 aimed at the full lifecycle of robot development, from training and simulation to deployment in factories, hospitals and vehicles.
Across the CES show floor, robots played ping pong, folded clothes and conversed with attendees, while autonomous vehicles demonstrated capabilities from farm fields to highways.
“The ChatGPT moment for robotics is here,” Huang said. “Breakthroughs in physical AI are unlocking entirely new applications.”
Nvidia’s announcements centered on “generalist-specialist” robots — machines that can learn multiple tasks with greater autonomy. These include new open models on Hugging Face: Cosmos Transfer 2.5 and Cosmos Predict 2.5 for synthetic data and simulation, Cosmos Reason 2 for vision-language perception and Isaac GR00T N1.6 for humanoid full-body control.
Evolving adoption
More companies are moving past prototypes: Franka Robotics, NEURA Robotics and Humanoid are testing Nvidia's GR00T-enabled workflows, while Salesforce applies the technology to video analysis for incident management, claiming to have cut resolution times in half.
In 2025, BMW’s Spartanburg plant deployed Figure humanoid robots on its assembly line, loading sheet-metal parts and helping roll out 30,000 vehicles. In November 2025, UBTECH began mass production of its Walker S2 humanoids, targeting 500 units in 2025 and scaling to 10,000 annually by 2027.
In healthcare, surgeons performed 2.68 million procedures using da Vinci robotic systems in 2024, with 1,526 new systems installed. At Yale New Haven Health, robotic surgery patients averaged hospital stays of just 1.5 days, compared with six days for open surgeries.
The broader market also reflects this: according to the International Federation of Robotics, industrial robot installations have reached a global value of $16.7 billion. Barclays reports that 21 new humanoid models were introduced in 2025, compared with three in 2022, and mentions of “humanoid” in news and reports grew nearly tenfold since January 2020.
From robots to the road
Humanoids weren’t the only machines learning to navigate the physical world. Autonomous vehicles had their own moment at CES — and their own Nvidia boost.
Huang revealed that Nvidia has developed software supplying the intelligence layer for autonomy that robotaxi companies can purchase and build upon. In partnership with Mercedes-Benz, the companies expect their first autonomous vehicle on US roads in the first quarter of 2026.
But autonomy isn’t just about the car seeing the road. It’s about the car understanding what you want from it. Michael Harrell, senior vice president of engineering for maps at TomTom, described emerging multiagent AI architectures: a primary voice assistant like Amazon Alexa or SoundHound orchestrating specialized subagents.
“The user speaks to what feels like a single assistant, but behind the scenes, the main agent communicates with our navigation subagent,” Harrell explained. “The user never knows there’s a subagent involved; it just feels like one highly capable assistant with deep navigation understanding.”
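The orchestration pattern Harrell describes can be sketched in a few lines. Everything here is hypothetical: the class names, the keyword-based routing rule and the canned navigation reply are invented for illustration, not TomTom's, Amazon's or SoundHound's actual APIs.

```python
# Minimal sketch of a multiagent voice-assistant architecture:
# one primary agent routes user utterances to specialized subagents.
# All names and routing logic here are hypothetical.

class NavigationSubagent:
    """Specialist agent for routing and map queries."""
    KEYWORDS = ("route", "navigate", "traffic", "directions")

    def can_handle(self, utterance: str) -> bool:
        return any(k in utterance.lower() for k in self.KEYWORDS)

    def respond(self, utterance: str) -> str:
        # A real subagent would query a navigation/maps backend.
        return "Fastest route found: 25 minutes via the highway."


class PrimaryAssistant:
    """The single assistant the user talks to; delegates behind the scenes."""

    def __init__(self, subagents):
        self.subagents = subagents

    def handle(self, utterance: str) -> str:
        for agent in self.subagents:
            if agent.can_handle(utterance):
                # The user never sees this hand-off.
                return agent.respond(utterance)
        return "I can help with general questions."


assistant = PrimaryAssistant([NavigationSubagent()])
print(assistant.handle("Navigate to the office, please"))
```

The point of the pattern is that the user-facing agent stays generic while domain depth lives in the subagents, which matches Harrell's description of one seemingly unified assistant.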
For Harrell, that intelligence still needs an anchor. Perception systems, he noted, are based on models that can hallucinate — generating plausible but incorrect outputs. “Maps ground these systems in reality,” he said.
A reality check
Grounding isn’t just a technical problem — it’s a regulatory one. While vendors stress rapid advances, deployment in regulated, safety-critical environments remains constrained by certification, cost and liability.
Thomas Cardon, director of EMEA automotive sales at QNX, knows this terrain well. His company’s software supports safety-critical systems in most modern vehicles, and he offered a measured counterpoint to the conference’s optimism.
“You have to remember that the automotive industry is highly regulated, both from a safety and a security standpoint,” he said.
Cardon drew a distinction that often gets lost in the buzz: there’s AI used to build products, and AI embedded in products. The former is already happening — automation around coding, testing and validation. The latter? Still years away.
“From our current discussions with OEMs, we are looking at start-of-production timelines around 2030 to 2032,” he said. “And for those programs, AI deeply embedded in the vehicle is not part of the plan.”
Physical AI’s investment outlook
So where does the money think this is headed? The Barclays report identifies three investment opportunities worth watching. First, scaling humanoid production creates tailwinds for industrials — actuator manufacturers and automation leaders who largely missed the first AI wave.
Second, defense technology represents major spillover, as many physical AI components are dual-use. With global defense spending projected to reach $6.6 trillion by 2035, this convergence should deepen.
And third, physical AI’s dependence on critical minerals creates its own market dynamics, given that over 90% of magnetic rare earths come from China.
If the CES show floor is any indication, humanoids are no longer clumsy curiosities: they are making their pitch for a place in everyday life, with skills that steadily improving software keeps expanding.