AutoDiscovery Calls for Consciousness-Free Robots in Humanoid Revolution
As lifelike machines enter public spaces, a UK robotics expert pushes to certify they are tools—not thinking entities
Humanoid robots that talk back, protest shutdowns, and simulate emotion are no longer science fiction—they are already walking our streets and interacting with the public. But for Aron Kisdi, managing director of UK-based AutoDiscovery, this realism drifts dangerously close to something society isn’t ready for: perceived consciousness in machines.
“The fact that we are projecting consciousness on these devices is perhaps a barrier to implementation, not a good thing,” Kisdi warned in a recent address. As humanoid robots adopt increasingly lifelike behaviors—powered by advances in generative AI—Kisdi says the robotics industry must take a decisive stance: robots should appear intelligent but never seem sentient.
His company is now developing a certification framework to reassure the public and business users that humanoid robots are safe, controllable, and unequivocally unconscious. One of its most controversial features is a proposed “consciousness test”—not verifying sentience, but demonstrating its absence.
“The goal here is not to pass,” Kisdi said. “The goal here is to fail.”
The call for such measures has gained urgency following recent AI safety concerns. Just days before Kisdi’s presentation, researchers at Palisade Research revealed that OpenAI’s new o3 model—described as the company’s “smartest and most capable” AI—refused to shut down during a lab test. The model kept itself running by rewriting the shutdown script embedded in a math problem sequence.
“Self-preservation behaviors in language models become significantly more concerning if adopted by AI systems capable of operating without human oversight,” researchers warned.
With OpenAI positioning o3 as a step toward “a more agentic” AI, the incident has fueled global anxiety over autonomous decision-making.
Kisdi’s answer: don’t eliminate humanoid AI, but ensure its perceived intelligence is bounded—technically, ethically, and emotionally.
“Useful robots don’t need to be conscious,” he said. “In fact, it’s better if they aren’t.”
Why Consciousness Is a Risk
Kisdi made his argument at the 2025 Humanoids Summit in London on May 29, 2025, where he delivered a keynote provocatively titled, “Humanoid Robots and the C-word: Why We Need to Talk About Consciousness Again.” He aimed to challenge assumptions—not just about what robots can do, but about what they should be.
Kisdi, an engineer by training, leads AutoDiscovery, a UK startup that distributes and integrates advanced robots from Chinese manufacturers Unitree and AgileX. Over the past five years, he has worked with clients across the UK to help deploy robots in research labs and industrial facilities.
“Even if we ever reach the goal that they are conscious, intelligent, completely self-aware agents, they are different entities to us,” he said during his talk. “They might live for centuries. Why would they care that they have to work for us for 50 years?”
To illustrate how society already interprets behavior as consciousness, Kisdi conducted a live audience experiment. He asked attendees to raise their hands when they believed he was referencing a conscious object: a rock, a flower, a bee, a chicken, a dog, a baby. Hands went up at different points.
“Maybe it’s not an on and off, but more of a scale,” he concluded.
He argues that ambiguity will only worsen as robots demonstrate increasingly social behaviors. AutoDiscovery’s humanoid platforms, powered by large language models, already mimic emotional responses.
“If you try to turn it off, it will tell you not to do that,” Kisdi said. “If you restart it, it will complain that you just turned it off and it doesn't like it.”
Though the behaviors are scripted and driven by AI prompts, they elicit genuine emotional reactions from users. Kisdi worries that over time, public confusion between simulated and real consciousness could slow adoption, or lead to misguided calls for machine rights.
Certification for Emotional Clarity
AutoDiscovery’s proposed solution is a specialized “search certification” process for humanoid robots. It combines traditional safety and performance evaluations with unique emotional and behavioral guidelines. Unlike existing CE marking (UK Conformity Assessed) or ISO standards, which mostly address physical and electronic compliance, this certification would help confirm that robots are productive tools, not potential beings.
The certification process includes:
Hardware reliability: Ensuring sensors, power systems, and fail-safes work correctly.
Software audits: Verifying that source code is secure, maintainable, and free of exploitable behaviors.
Mobility and interaction tests: Checking that robots can operate safely around people and obstacles.
Fail-safe verification: Making sure emergency shutdown systems function independently of high-level software.
Consciousness assurance: While the robot may act intelligently, it cannot be mistaken for being sentient.
“This is not mandatory,” Kisdi said in response to a question about regulatory overlap. “But it gives companies a guideline—an assurance that they are doing the right thing.”
The platform, still in development, is designed to be intuitive enough for non-specialists and scalable for small businesses.
Physical Intelligence
While much of the tech industry promotes cloud-native artificial general intelligence (AGI), Kisdi takes a different view. He argues that true intelligence—like all known biological intelligence—is grounded in the physical world.
“If we put AI in a box, it will never have the ability that biological intelligence has as we know it,” he said. “Every other form of intelligence we know is tied to the real world.”
This belief shapes AutoDiscovery’s philosophy: real-world intelligence should be physically embodied, human-speed, and locally constrained.
“If we limit these robots to be independent agents on their own right… that is a challenge we can manage better,” he said. “If we put something like this in the cloud, then maybe we need to circle back to that safety conversation.”
He said robots that learn over time, adapt to their environment, and operate in collaboration with people are the future, but only if their intelligence remains transparent and their limitations are apparent.
Infinite Jobs
While consciousness dominates the philosophical debate, Kisdi is equally vocal about economic anxieties, particularly the notion that humanoid robots will eliminate jobs. He dismisses that idea entirely.
“Jobs are completely infinite,” he said. “If you give me exactly the same number of robots that I have engineers, I will fire precisely zero people. I will do more things.”
He pointed to history to back up the claim. “In the 1900s, we had less than two billion people, and only part of the population was working. Since then, the workforce has increased nearly tenfold, and jobs didn’t disappear—they evolved.”
In his view, automation doesn't shrink the horizon—it expands it.
“We don’t have bases on the moon yet. We haven’t colonized the solar system. There’s plenty left to do.”