UN STI Forum speakers warn AI governance risks leaving billions behind
Global South communities face AI systems built around abundant data, reliable infrastructure and English-speaking digital users

Artificial intelligence (AI) can detect disease from medical imaging, translate rainfall data into river-discharge models, and accelerate research timelines that once spanned decades.
What remains unresolved is whether the governance architecture being built around AI addresses the needs of the people it is most likely to leave behind.
At the ministerial session of the 11th annual UN Multi-stakeholder Forum on Science, Technology and Innovation for the Sustainable Development Goals (STI Forum) in New York, diplomats and researchers said the debate has focused heavily on the hypothetical risks of frontier models, while a more immediate crisis is structural: AI is being built for contexts of abundance, not scarcity.
Speakers included Koessler, Rita Orji of the United Nations Independent International Scientific Panel on AI, Bjørg Sandkjær of the UN Department of Economic and Social Affairs, and Helmut Habersack of BOKU in Vienna.
Koessler framed the challenge as a mismatch between the speed of emerging technologies and the capacity of international institutions to keep pace.
“Advances in AI, biotechnology, quantum computing and advanced materials are reshaping economies and society much faster than our governance systems have traditionally evolved,” he said, adding that the result was not merely a lag but a “substantial risk.”
“Without appropriate governance, emerging technologies can exacerbate inequality, undermine trust, and outpace the safeguards designed to protect people and public interests,” he said.
He acknowledged that recent steps towards a multilateral AI governance architecture represent an important beginning. He described a UN-based approach grounded in multilateralism, human rights and sustainable development, emphasizing the principle of “putting humans at the center of the effort.”
That effort is taking shape against the backdrop of the Pact for the Future, a landmark agreement adopted by UN Member States at the 2024 Summit of the Future. It was designed to make the international system more inclusive and better suited to 21st-century challenges.
Alongside the pact, Member States adopted the Global Digital Compact, a framework for digital cooperation and AI governance, and the Declaration on Future Generations, which calls for longer-term thinking. The compact focuses on closing digital divides, governing AI for humanity, and protecting human rights online through cooperation among governments, businesses and civil society.
Koessler stressed that adopting such frameworks is only a first step.
“Frameworks alone are not enough,” he said. “They require political commitment, technical expertise and continuous dialogue across countries and sectors.”
He also warned that the rapid expansion of AI and data systems is driving up energy demand, and that this growth must not be allowed to raise costs, strain grids, or widen inequality.
He called ensuring affordable, reliable, and clean energy for all “a defining test in the final stretch to 2030,” and stressed the need to align digital transformation with energy access so that AI drives SDG progress for everyone, pointing to sub-Saharan Africa as a pressing example.
Three structural gaps
Rita Orji took that concern further, saying the dominant AI governance debate is focused on the wrong problems for much of the world.
She identified three structural gaps holding back AI’s potential to serve sustainable development:
Data gap: Many datasets used to train AI systems still under-represent the Global South.
Design gap: Many AI systems assume users are literate, English-speaking and digitally fluent, assumptions that do not hold for much of the world’s population.
Governance gap: Global AI governance remains heavily focused on frontier risks, while the more immediate problem for many communities is irrelevant AI: systems that may be technically powerful but have little practical value in the places that need them most.
Orji said the focus on frontier models, existential risks and safety benchmarks should not obscure more immediate barriers, including weak infrastructure, limited language support and the lack of digital records.
In her presentation, Orji used two healthcare scenarios to show why AI systems can fail outside the environments where they are developed:
A well-resourced lab in North America, with powerful servers, stable electricity, clean datasets, and a team of engineers and scientists training models to detect early signs of disease from medical imaging.
A clinic in rural southeastern Nigeria, near where she grew up, with one doctor, intermittent power, paper records, and patients who may need to walk two hours just to reach the facility.
“The AI built in that first lab will work beautifully in that lab and similar contexts,” Orji said. “But take it to that clinic, and it will fail.”
The failure, she said, was not scientific but structural: “No reliable electricity to run it, no digital records to feed it, no internet to update it, no local language to explain it, and no trust built with the communities to accept it and adopt it.”
She said innovation has little value if it is not adopted by the people it is meant to serve, and that too much of what is being built never completes that journey.
The rules underneath
Last December, the UN completed the WSIS+20 Review, an assessment of two decades of progress since the 2005 World Summit on the Information Society.
The review concluded with a consensus resolution that reaffirmed support for a people-centered digital society. The resolution also aligned future digital goals with the SDGs and the Global Digital Compact.
For the AI governance debate, that context matters because the foundation of any equitable framework rests on how data is managed, shared and made accessible.
Bjørg Sandkjær, Assistant Secretary-General for Policy Coordination at the United Nations Department of Economic and Social Affairs, linked the WSIS+20 outcome to the forum’s discussion on data governance, digital public goods and equitable access.
Sandkjær said the Global Digital Compact and the WSIS+20 Review outcomes underscore the need for an open, safe and inclusive digital future that harnesses digital technologies to advance sustainable development while protecting human rights and ensuring accountability.
She described the STI Forum’s discussions on data governance, digital public goods and equitable access as essential contributions to that vision.
“Innovation does not happen in isolation. It thrives in ecosystems built on trust, openness and shared value,” she said.
Sandkjær also pointed to the Technology Facilitation Mechanism, which is anchored in the Addis Ababa Action Agenda and reinforced through the Seville Commitment, the Pact for the Future and the Global Digital Compact.
She highlighted a new assessment by the UN Inter-agency Task Team on Science, Technology and Innovation for the SDGs (IATT) on barriers to the international diffusion of zero- and low-emission technologies. The findings will be compiled into a public document to inform future financing discussions.
AI’s data blind spot
The same concern also appeared in discussions about what AI systems can and cannot detect.
Helmut Habersack, UNESCO Chair on Integrated River Research and head of BOKU Vienna’s Institute of Hydraulic Engineering and River Research, acknowledged AI’s growing power in hydrological modeling but cautioned against treating the tools as self-sufficient.
He said AI models can only work with the information fed into them. General-purpose AI tools can produce useful responses on water issues, but may struggle with future conditions that do not follow past patterns.
“If I ask ChatGPT or other tools now about water, you get nice things, which is really good,” Habersack said. “But the question is: can it also look into the future? Can it take up changes that are not on the pathway from the past?”
He said researchers still need to “put our finger into the water,” because AI cannot explain everything from Earth observation alone.
“We have to do experiments, because we have to get new knowledge,” he said, adding that laboratory work with real-world variables is necessary to produce new equations that AI can then help refine.
Access, not ability
Orji closed her keynote with a personal account that connected directly to her technical argument. She described growing up in a remote village in southeastern Nigeria, born to peasant farmers with no access to education. There was no electricity, no pipe-borne water, no computers.
“Before I was admitted to study computer science at the university, I had never used a computer,” she said. “I had never seen one up close, actually.”
She chose the field not because she understood it, but because she hoped it could help her change things for her community. She learned to code and build systems without owning a computer, relying on borrowed machines, university labs and paid computer time in business centers.
She said she shared the story because it is not unusual.
“Across the Global South right now, there are young people with extraordinary talents who are locked out, not because they lack ability, but because they lack access,” she said. “If AI is designed only for people who already have everything, it will never reach the people who could do the most with it.”
Ready for whom?
Throughout the session, a fault line ran beneath the discussion. On one side was the governance conversation taking shape in capitals and multilateral forums, focused on norms, principles and institutional mechanisms for frontier AI. On the other was a harder question that surfaced repeatedly across the panel: who will those frameworks actually serve?
Each speaker approached that question from a different angle: Koessler through energy access, Habersack through the limits of data-driven tools, Sandkjær through data governance and digital public goods, and Orji through whether AI systems are being designed for communities most likely to be excluded.
Orji said the conventional framing asks whether developing nations have the infrastructure, skills and institutions to adopt AI, but argued that the framing itself is the problem.
She said innovators across the Global South are already building AI systems under constraints that most well-funded labs will never encounter: unreliable power grids, sparse datasets and dozens of local languages. Those constraints have produced tools that are not imitations of what exists elsewhere but responses to conditions on the ground.
“Perhaps the question before us today is not whether the Global South is ready for AI,” she said. “Perhaps the real question is whether the global AI future is ready to learn from the Global South.”



