Scientific Laureates Question Whether AI Can Replicate Einstein-Level Insight
Nobel and Turing laureates say AI may accelerate discovery but lacks human curiosity, judgment and emotional drive behind scientific breakthroughs

Artificial intelligence can optimise chemical reactions, predict weather patterns in seconds, and accelerate data-heavy research at unprecedented speed. What remains unresolved is whether it can ever replicate the intuitive leaps that underpin the greatest scientific breakthroughs.
That question took centre stage at the AI Science Forum during the World Laureates Summit in Dubai, where Nobel laureates, Turing Award winners, and senior researchers gathered to assess how machine learning is reshaping — and potentially constraining — the scientific method.
Chaired by Professor Tony Fan-cheong Chan, former president of King Abdullah University of Science and Technology, the forum posed a deliberately provocative challenge: can a system trained on existing data generate ideas that fall outside established frameworks?
To frame the debate, Chan invited the panel to consider a counterfactual. If a machine were fed every known dataset available before 1905 — but stripped of human intuition — could it have produced Albert Einstein’s theory of relativity?
It could not, Chan argued, “because these data were not in the dataset,” suggesting that paradigm-shifting discoveries require conceptual leaps beyond statistical inference.
“If someone else besides Einstein had the best AI model,” he asked, “would that person be able to discover general relativity? Or quantum mechanics?”
The ‘polite’ collaborator
While the theoretical limits of AI remain debated, its practical application in the laboratory is already reshaping the speed of discovery.
Omar Yaghi, the 2025 Nobel Laureate in Chemistry and Professor of Chemistry at the University of California, Berkeley, detailed how his team is using large language models (LLMs) to accelerate the creation of metal-organic frameworks (MOFs), materials used to harvest water from desert air.
By integrating AI into the experimental cycle, Yaghi said his students have achieved crystallization results three times better than the community average over the last decade.
“It has changed my entire group from just an experimental group into a group that blends experiment with AI tools,” Yaghi said. He noted that AI agents allow new researchers to bypass the “geologic scale” of traditional training, accelerating their ability to contribute to high-level science.
However, Dr. Jayant Haritsa, Senior Professor of Computer Science and Automation at the Indian Institute of Science, offered a counterpoint on the quality of AI mentorship. He argued that while AI can provide breadth — turning “T-shaped” researchers (depth in one area, breadth in many) into “Pi-shaped” scholars (depth in multiple areas) — it lacks the critical edge of a human advisor.
“AI can never succeed because it’s too polite,” Haritsa said. “Most good research comes from tough love... if you have something that is always polite, it encourages you to be lazy.”
Energy and emotion
The forum also highlighted the physical and biological constraints that differentiate silicon intelligence from carbon-based curiosity.
Dr. Hesham Omran, laureate of the 2023 UNESCO–Al Fozan International Prize for the Promotion of Young Scientists in STEM, warned that the scientific community is putting “all the eggs in one basket” by pivoting so heavily toward AI. He contrasted the massive energy consumption of modern data centres, which are “melting our grids,” with the extreme efficiency of the human mind.
“The human brain, a miracle created by God, is consuming just 20 watts of power,” Omran said, questioning the sustainability of achieving Artificial General Intelligence (AGI) if the energy costs become prohibitive.
Serge Haroche, the 2012 Nobel Laureate in Physics and Professor of Quantum Physics at the Collège de France, added that the human brain possesses a critical variable that machines lack: emotion. Haroche argued that scientific discovery is driven by a visceral need to understand the world, a quality distinct from data processing.
“I don’t see how AI can have this kind of quality,” Haroche said. “It will help make discoveries as a very powerful tool, but you will always need to have something behind that drives you.”
Beyond the ‘Silicon Valley nonsense’
The discussion also served as a reality check on the terminology often used to hype the technology. Professor Michael I. Jordan, World Laureates Association Prize Laureate in Computer Science or Mathematics (2022) and Professor Emeritus, University of California, Berkeley, intervened from the audience to dismiss the concepts of AGI and Artificial Super Intelligence (ASI) as “Silicon Valley nonsense” designed for venture capitalists.
Jordan argued that the goal of the field is not to create a sentient mind, but to build reliable, truth-seeking workflows that integrate with human economics and science.
“If you told me that we didn’t reach some mystical goal of AGI in 10 years, was that a disappointment? I’d say no,” Jordan said.
The evolving method
Despite scepticism about AI’s ability to replace scientists, the consensus was that the scientific method itself is undergoing a permanent evolution.
Robert Tarjan, the 1986 Turing Award laureate and Distinguished University Professor of Computer Science at Princeton University, known for his work on graph algorithms, noted that while AI currently “hallucinates,” it offers a potential future in which it acts as a rigorous check on human logic, finding missing cases and building counterexamples.
“Asking the right question is more important than finding the answer,” Tarjan said, emphasising that while AI can process answers, the formulation of the question remains a distinctly human privilege.
Concluding the session, Professor Chan struck an optimistic tone, suggesting that while the integration of these tools is inevitable, the focus must remain on teaching the next generation to use them with proper judgment.
“No doubt it’s a tool that we can all use in a very efficient way,” Chan said, urging educators to ensure students are trained correctly. “I think the future is bright.”