Physical AI and Autonomous Systems Governance Challenges Explained

Governance around Physical AI is becoming increasingly challenging as autonomous AI systems are integrated into robots, sensors, and industrial equipment. The critical question extends beyond whether AI agents can complete tasks—it encompasses how their actions are tested, monitored, and halted when interacting with real-world systems.
📊 Industrial Robotics Growth
Industrial robotics provides a substantial foundation for this discussion. The International Federation of Robotics reported that 542,000 industrial robots were installed worldwide in 2024—more than double the annual level recorded a decade earlier. Projections indicate installations will reach 575,000 units in 2025 and surpass 700,000 units by 2028.
Market researchers are applying the Physical AI label to an expanding range of systems, including robotics, edge computing, and autonomous machines. Grand View Research estimated the global Physical AI market at US$81.64 billion in 2025, projecting growth to US$960.38 billion by 2033, though categorization depends on how vendors define intelligence in physical systems.
🔄 From Model Output to Physical Action
The governance challenge differs fundamentally from software-only automation because physical systems operate in and around workplaces, infrastructure, and human users, and they connect to equipment that requires clear safety limits. A model output can translate into a robot movement, a machine instruction, or a sensor-based decision, which makes safety limits and escalation paths integral to system design.
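The idea of a safety limit between model output and physical action can be illustrated with a minimal sketch. The `JointLimits` type, the limit values, and the `gate_command` helper below are illustrative assumptions, not part of any vendor API: a model-proposed command is clamped to configured bounds, and out-of-range requests are flagged for logging and escalation.

```python
from dataclasses import dataclass

@dataclass
class JointLimits:
    max_velocity: float  # rad/s
    max_torque: float    # N*m

def gate_command(velocity: float, torque: float, limits: JointLimits):
    """Clamp a model-proposed joint command to configured safety limits.

    Returns the (possibly clamped) command plus a flag indicating that the
    original request exceeded limits and should be logged and escalated.
    """
    clamped_v = max(-limits.max_velocity, min(velocity, limits.max_velocity))
    clamped_t = max(-limits.max_torque, min(torque, limits.max_torque))
    needs_review = (clamped_v != velocity) or (clamped_t != torque)
    return clamped_v, clamped_t, needs_review
```

The point of the sketch is that the limit check sits outside the model: whatever the model proposes, the physical command that reaches the actuator stays within bounds set at design time.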
🤖 Google DeepMind's Robotics Innovation
Google DeepMind's robotics work exemplifies how AI models are being adapted for this environment. The company introduced Gemini Robotics and Gemini Robotics-ER in March 2025, describing them as models built on Gemini 2.0 for robotics and embodied AI. Gemini Robotics functions as a vision-language-action model designed to control robots directly, while Gemini Robotics-ER focuses on embodied reasoning, including spatial understanding and task planning.
A robot utilizing this model type must identify objects, understand instructions, and plan movement sequences. It must also assess task completion accuracy—creating a control problem encompassing both model behavior and mechanical system limits.
Key Requirements for Useful Robots:
- Generality – Handling unfamiliar objects and environments
- Interactivity – Responding to human input and changing conditions
- Dexterity – Executing physical tasks requiring precise movement
In launch materials, Google DeepMind stated that Gemini Robotics could follow natural-language instructions and perform multi-step manipulation tasks. Examples included folding paper, packing items into bags, and handling objects not seen during training.
⚙️ Technical Requirements Beyond Language Understanding
Physical AI technical requirements extend beyond language understanding to include visual perception, spatial reasoning, task planning, and success detection. In robotics, success detection is critical because systems must determine whether tasks are completed, require retry attempts, or should be stopped.
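The complete-retry-or-stop decision described above can be sketched as a simple control loop. The function names and the retry budget here are hypothetical, assuming only that a task executor and a success detector are available as callables:

```python
def run_with_success_detection(attempt_task, detect_success, max_attempts=3):
    """Run a task with an explicit success check after each attempt.

    Returns "completed" when the detector confirms success, or "stopped"
    once the retry budget is exhausted, at which point the system should
    halt and escalate rather than keep acting.
    """
    for _ in range(max_attempts):
        attempt_task()
        if detect_success():
            return "completed"
    return "stopped"
```

The design choice worth noting is the bounded retry budget: without it, a system that cannot detect failure reliably would keep acting on the physical world indefinitely.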
Google DeepMind's Gemini Robotics-ER 1.6, introduced in April 2026, demonstrates how these functions are being packaged in newer models. The company describes the model as supporting spatial logic, task planning, and success detection, with capabilities to reason through intermediate steps and decide whether to proceed or retry.
Google's developer documentation indicates Gemini Robotics-ER 1.6 is available in preview through the Gemini API. It's described as a vision-language model bringing Gemini's agentic capabilities to robotics, including visual interpretation, spatial reasoning, and planning from natural-language commands.
🛡️ Safety Controls in System Design
Governance complexity increases when systems can call tools, generate code, or trigger actions. Controls must define:
- What data the system can access
- Which tools it can use
- Which actions require human approval
- How activity is logged for review
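The four controls above can be sketched as a single policy gate. The policy table, tool names, and `authorize` helper below are illustrative assumptions rather than any real framework's API: each tool call is checked against an allow-list, approval-required actions are blocked until a human signs off, and every decision is appended to an audit log.

```python
# Illustrative policy table: which tools exist, and which need human sign-off.
POLICY = {
    "read_sensor":    {"allowed": True,  "needs_approval": False},
    "move_actuator":  {"allowed": True,  "needs_approval": True},
    "flash_firmware": {"allowed": False, "needs_approval": True},
}

audit_log = []  # every authorization decision is recorded for review

def authorize(tool: str, human_approved: bool = False) -> bool:
    """Gate a tool call against the policy table; unknown tools are denied."""
    rule = POLICY.get(tool, {"allowed": False, "needs_approval": True})
    decision = rule["allowed"] and (human_approved or not rule["needs_approval"])
    audit_log.append({"tool": tool,
                      "human_approved": human_approved,
                      "executed": decision})
    return decision
```

Denying unknown tools by default, rather than allowing them, is the governance-relevant choice: anything not explicitly reviewed never reaches the physical system.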
McKinsey's 2026 AI trust research highlights this issue in enterprise AI more broadly, finding that only around one-third of organizations reported a maturity level of three or higher for strategy, for governance overall, and for agentic AI governance specifically, even as AI systems take on more autonomous functions.
In robotics, safety encompasses the physical behavior of machines. Google DeepMind describes robot safety as a layered problem, covering:
🔹 Lower-level controls: Collision avoidance, force limits, stability
🔹 Higher-level reasoning: Contextual safety assessment of requested actions
The company introduced ASIMOV, a dataset for evaluating semantic safety in robotics and embodied AI, designed to test whether systems can understand safety-related instructions and avoid unsafe behavior in physical settings.
📋 Governance Frameworks and Standards
Controls already used for software agents, including access rights, audit trails, refusal behavior, escalation paths, and testing, become harder to manage when those systems connect to robots, sensors, or industrial equipment.
Governance frameworks such as the NIST AI Risk Management Framework and ISO/IEC 42001 provide structures for managing AI risks and responsibilities across system lifecycles. In Physical AI, these controls must account for model behavior, connected machines, and operating environments.
🤝 Industry Partnerships and Collaborations
Google DeepMind has partnered with robotics companies as part of its embodied AI development. In March 2025, the company announced a partnership with Apptronik on humanoid robots using Gemini 2.0, and listed Agile Robots, Agility Robotics, Boston Dynamics, and Enchanted Tools among trusted testers for Gemini Robotics-ER.
The 2026 update referenced work with Boston Dynamics involving robotics tasks such as instrument reading—use cases depending on visual understanding, task planning, and reliable assessment of physical conditions.
🏭 Real-World Applications
Physical AI applies to industrial inspection, manufacturing, logistics, facilities, and warehouses. These settings require systems to interpret real-world conditions and act within defined limits. The governance question centers on how those limits are established before autonomous systems are permitted to make or execute decisions.
📅 Upcoming Event: Google DeepMind and Google AI Studio are listed as hackathon technology partners for AI & Big Data Expo North America 2026, taking place on May 18–19 at the San Jose McEnery Convention Center.
(Photo by Mitchell Luo)
See also: AI agent governance takes focus as regulators flag control gaps
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo, taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events.
AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.