ASTM Calls for Urgent Safety Standards for Humanoid Robots
Expert warns existing industrial rules overlook dynamic stability, psychosocial impact, and public interaction risks of humanoids

As humanoid robots move from factory floors to public spaces and eventually homes, a leading robotics standards authority has called for urgent development of new safety frameworks.
The appeal comes amid mounting concern that decades-old industrial robotics regulations fail to address the distinct hazards posed by human-like machines.
“Humanoids are something completely new for robotics, especially when it comes to standards. The human-centric design raises our expectation levels,” said Aaron Prather, Director of Robotics & Autonomous Systems Programs at ASTM International.
He said public users are likely to react more emotionally to humanoid failures than to mechanical mishaps. This change in perception demands a fresh approach to safety, interaction protocols, and risk management.
Prather emphasized that humanoids are expected to operate alongside untrained members of the public, rather than just trained industrial workers.
This shift, he argued, means standards must account for shared-space scenarios—such as robots cleaning transport hubs—where bystanders may not engage directly with the machines but could still be at risk.
He said that placing humanoids into real-world environments inevitably carries risks that existing standards have not yet addressed.
He outlined six priority risk areas identified by ASTM’s study group:
Physical safety: The most immediate concern is tip-overs, which could injure nearby people or damage surroundings. Many humanoids withstand hard impacts yet collapse under gentle pushes.
Psychosocial impact: Humans may over-trust or quickly grow frustrated with humanoids, affecting workplace morale and emotional well-being.
Ergonomics: Robots must handle reaching and balancing like humans, correcting missteps to prevent falls or strain-related failures.
Privacy and ethics: Humanoids will collect large amounts of sensor data in public and private spaces, raising questions about surveillance and consent.
Cybersecurity: As networked devices, humanoids risk hacking or remote takeover, which could lead to safety incidents or data breaches.
Reliability: Standards must define expected uptime and failure modes, ensuring predictable behavior even when errors occur.
Expanding on the psychosocial risks, Prather said the human-like form factor carries emotional and cultural baggage: workers may react very differently to a humanoid than to a robotic arm, and excessive trust or rapid frustration could undermine workplace morale. He added that ergonomics would be equally critical, as humanoids must be able to correct overextension and maintain balance much as humans do.
Building a Classification Framework
Speaking at the 2025 Humanoids Summit in London, Prather revealed that ASTM and fellow standards development organizations (SDOs) are now building a multi-axis classification framework for humanoids. The framework would categorize robots by physical capabilities, behavioral intelligence, operational context, stability profile, and level of human contact.
ASTM International, formerly the American Society for Testing and Materials, is a 126-year-old standards body headquartered in Pennsylvania. It publishes over 12,500 voluntary consensus standards worldwide, guiding the safety, quality, and performance of products across industries. In robotics, ASTM works alongside global partners through initiatives such as its National Institute of Standards and Technology (NIST)-funded Standardization Center of Excellence.
He argued that this system would help regulators distinguish, for instance, between industrial humanoids confined to factories and entertainment robots designed for close interaction with children. He said a classification framework would clarify which safety thresholds apply to which designs, noting that a small entertainment robot operating in theme parks would likely require stricter safeguards than an industrial humanoid operating in controlled environments.
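A multi-axis framework of the kind described above can be pictured as a simple record with one field per axis. The sketch below is purely illustrative — the class names, tiers, and the toy safeguard rule are assumptions for this article, not part of any ASTM proposal:

```python
from dataclasses import dataclass
from enum import Enum

class OperationalContext(Enum):
    INDUSTRIAL = "industrial"        # controlled factory environment
    PUBLIC_SPACE = "public_space"    # shared spaces, e.g. transport hubs
    ENTERTAINMENT = "entertainment"  # close interaction, incl. children
    DOMESTIC = "domestic"            # private homes

class StabilityProfile(Enum):
    STATICALLY_STABLE = "static"                  # e.g. wheeled base
    ACTIVELY_CONTROLLED = "actively_controlled"   # balance needs power

class HumanContact(Enum):
    NONE = 0        # segregated from people
    INCIDENTAL = 1  # bystanders share the space
    DIRECT = 2      # designed for physical interaction

@dataclass
class HumanoidClassification:
    physical_capability: int      # hypothetical payload/reach tier
    behavioral_intelligence: int  # hypothetical autonomy tier
    context: OperationalContext
    stability: StabilityProfile
    contact: HumanContact

    def requires_strict_safeguards(self) -> bool:
        """Toy rule: direct human contact outside a controlled
        industrial setting triggers the strictest tier."""
        return (self.contact is HumanContact.DIRECT
                and self.context is not OperationalContext.INDUSTRIAL)

# An entertainment robot interacting with children in a theme park
# lands in the strictest tier under this toy rule:
park_bot = HumanoidClassification(1, 2, OperationalContext.ENTERTAINMENT,
                                  StabilityProfile.ACTIVELY_CONTROLLED,
                                  HumanContact.DIRECT)
print(park_bot.requires_strict_safeguards())  # True
```

The point of the multi-axis shape is that no single label like "humanoid" determines the safety tier; regulators would read off a combination of context, contact level, and stability profile.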
Prather added that current terminology may also need updating, as the term “humanoid” could be less useful than a focus on “actively controlled stability.”
When power is cut, humanoids often lose balance entirely, introducing novel safety hazards absent from wheeled robots. Addressing this, the International Organization for Standardization (ISO) has just approved its first related standard, ISO 25785, covering safety requirements for industrial mobile robots with actively controlled stability.
He stressed that stability metrics will be essential. ASTM proposes a dual approach—first, establishing performance-based metrics to measure how much force it takes to topple a robot, and second, developing behavior-based safety standards to guide how robots respond to instability.
“Some of these robots will tip over if you just slowly push them,” he said. “By the time they start falling, it’s too late—they can’t recover.”
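One textbook performance-based metric of the kind Prather describes — how much force it takes to topple a robot — is the quasi-static tipping threshold from rigid-body statics. The sketch below is illustrative physics, not an ASTM test method, and the example numbers are assumed:

```python
def quasi_static_tipping_force(mass_kg: float,
                               com_margin_m: float,
                               push_height_m: float,
                               g: float = 9.81) -> float:
    """Horizontal force (N) needed to quasi-statically tip a rigid body.

    A slow push of force F at height h creates a toppling moment F*h
    about the support-polygon edge; gravity resists with m*g*d, where
    d is the horizontal distance from the centre of mass to that edge.
    Tipping begins when F*h > m*g*d, i.e. F_tip = m*g*d / h.
    """
    if push_height_m <= 0:
        raise ValueError("push height must be positive")
    return mass_kg * g * com_margin_m / push_height_m

# A hypothetical 60 kg humanoid with a 5 cm stability margin, pushed
# slowly at chest height (1.2 m), tips at roughly 24.5 N -- the kind
# of gentle, sustained push Prather warns about.
print(round(quasi_static_tipping_force(60.0, 0.05, 1.2), 1))  # 24.5
```

The same arithmetic shows why behavior-based standards matter too: a dynamic recovery step can enlarge the effective support polygon, but only if the robot reacts before the toppling moment wins.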
Collaborative Efforts Across Borders
Prather emphasized that setting global standards must be a collaborative effort rather than a fragmented one. ASTM is working with ISO, NIST, the Institute of Electrical and Electronics Engineers (IEEE), and the British Standards Institution (BSI), and potentially with the Acceptable Means of Compliance (AMC) framework, to align safety benchmarks and prevent duplication.
“The number one sin in the standards world is duplication of effort. We are trying to break down the silos between organizations and between countries,” he said.
He rejected the argument that strict rules could stifle innovation, stressing that they create a baseline from which innovation can grow. He cited Boston Dynamics and Agility Robotics as firms that gained a competitive advantage by engaging early in standards-setting.
He added that siloed, company-specific safety frameworks would erode public trust by confusing regulators and complicating compliance. To counter this, ASTM has begun hosting cross-organization meetings twice a year to coordinate approaches and accelerate adoption.
Towards Home-Use Guidelines
Looking ahead, Prather said that standards for home-use humanoids are still several years away. He explained that U.S. consumer safety regulators are only beginning to examine AI-enabled devices for domestic environments, and that their eventual rules are likely to draw heavily on ASTM’s work.
“When it comes to the home, I would definitely say probably five years until standards are done,” he said.
He noted that consumer regulators have sweeping powers, including the authority to recall unsafe robots from homes.
“That’s the power they have. They just recalled 6,000 robots in the U.S. last year for the defect of bursting into flames,” he remarked, underscoring why robust safety baselines must be set before humanoids enter households.
Prather urged researchers, manufacturers, and policymakers to contribute to the drafting process, stressing that robust data is essential to produce meaningful safety metrics.
“Good research leads to good standards,” he concluded. “We want the marketplace to trust what we’re putting out there—that these robots are safe and they’re going to deliver on what we are promising.”