Apple Siri Co-Founder Backs Humanistic AI to Augment, Not Replace Workers
The debate over humanistic AI intensifies as businesses weigh whether to use intelligent systems to augment people rather than replace them

Artificial intelligence (AI) can now write code, draft legal contracts and converse with striking fluency. What remains unresolved is how corporations will ultimately use these capabilities: to tackle difficult human problems, or simply to cut labour costs and boost short‑term profits. That dilemma sits at the heart of a growing debate over the future of AI deployment.
Apple’s Siri was originally conceived as an example of “humanistic AI,” a design approach aimed at augmenting human capability rather than automating people out of the workforce, according to its co‑founder Tom Gruber. As AI systems grow more powerful, Gruber warns that companies risk betraying that vision if they treat the technology primarily as a tool for eliminating jobs.
“The human job is context, understanding, and communication with humans,” he said at a recent event in Hong Kong. “The AI job should be doing a sub‑task that humans can define well, and especially can evaluate well.”
Gruber warned that poorly directed AI initiatives within companies often push middle managers toward the easiest cost‑cutting targets, typically staff reductions, rather than using the technology to create new value. He described that approach as a cowardly misuse of AI’s potential.
The universal interface
Gruber has long framed AI not as a replacement for humans but as a universal communication tool capable of expanding access to knowledge.
His interest in conversational computing began decades before modern large language models emerged. With a background spanning psychology and computer science, he began building a rudimentary language system in 1983 to help non‑verbal individuals, such as the late physicist Stephen Hawking, generate speech.
“The idea that you could speak in your own language, with your own voice and skill... means it’s a universal interface,” he explained.
Gruber shared his views during a fireside chat at Technology for Change Asia 2026, where industry and government leaders gathered in Hong Kong to explore how emerging technologies could reshape business and society. The event, organised by Economist Impact, was held on March 11–12.
The session examined the evolution of AI assistants from the symbolic programming approaches of the 1980s to the massive neural networks that dominate today.
By removing the need for traditional computer literacy, voice‑based AI could empower populations that might otherwise be left behind.
Tom Standage, deputy editor of The Economist and moderator of the session, pointed to examples in India where farmers using basic feature phones access agricultural advice through voice‑driven AI tools without needing to read or type.
Gruber described this approach as a “big brain, small screen” architecture, allowing powerful AI models to operate behind simple interfaces and extend services such as healthcare guidance, education and conversational support to anyone with access to a mobile network.
Beyond the ‘super tool’ threat
The discussion also addressed the risks associated with the rapid spread of powerful generative AI tools.
Gruber warned that bad actors now possess what he called “super tools” capable of generating deepfakes, fraud campaigns and large‑scale misinformation. He said attempts to embed simple ethical rules into AI systems, similar to Isaac Asimov’s fictional laws of robotics, are unlikely to work with modern language models.
Standage added that mitigating those risks requires what he described as a “defense in depth, whole-of-society approach,” arguing that regulation should focus not only on AI systems themselves but also on restricting dangerous uses of their outputs.
Gruber said technological safeguards must be reinforced by legal protections.
“If you allow rampant use of data impersonating humans at scale by anyone, the trust basis for financial services and health care will collapse,” he warned, calling for enforceable laws against digital impersonation.
The personalised future
In a joint statement released on January 12, Apple and Google announced a multi‑year collaboration under which the next generation of Apple Foundation Models will be based on Google’s Gemini models and cloud technology.
The two technology giants said the models will power future Apple Intelligence features, including a more personalised Siri expected later this year, while continuing to run on Apple devices and Private Cloud Compute in line with Apple’s privacy standards.
Standage noted Apple’s decision to integrate Google’s Gemini model into Siri, likening the upgrade to a “brain transplant.”
Gruber said he was “very excited” about the development, predicting it could transform the assistant from a simple task manager into a far more capable conversational partner.
Looking ahead, he envisioned a future in which personal AI systems maintain extensive conversational histories with their users, enabling them to anticipate needs and provide highly personalised assistance.
Toward the end of the discussion, Gruber struck an optimistic note about the broader potential of human‑centered AI.
“There is now a pretty profound opportunity in front of us,” he said. “Now is the time to really build more space and solve those big problems you want solved.”