The Paris AI Action Summit is over, and the scene is set for the next one, which will be hosted by India. This year's Summit followed previous meetings at Bletchley Park, UK (2023), and in Seoul, South Korea (2024). Ostensibly, these convenings aimed to agree on guardrails for the future of AI.
The Paris AI Summit reflected the times in which it was held: times of fragmenting geopolitics, divergent priorities, and the growing pursuit of sovereign interest over collective good.
Many observers concluded that the Summit was not only a trade fair for Paris's AI ambitions (French President Macron's "plug, baby, plug" was, umm, a plug for the country's nuclear industry) but also a failure on its own terms. Although more than sixty countries, including China, India, and Germany, signed a declaration committing to "ensuring AI is open, inclusive, transparent, ethical, safe, secure and trustworthy, taking into account international frameworks for all," both the US and the UK refused to do so, with US Vice President J. D. Vance saying the United States would work to retain its dominance in AI.
There were bright spots. The commitment to Current AI, a $400-million fund focused on building “open, people-first technologies,” offers hope, yet the initiative will have to demonstrate how it can have ecosystemic impact and bend the arc of AI development toward the good.
My main takeaway from the AI Summit is that, despite Big Tech's increasingly centralized, mega-investment approach to AI, which seems to be rushing headlong toward doomsday, the most interesting developments are in fact the opposite: away from the big companies, in efforts focused on shifting power toward people.

I saw this at the African AI Village, an independent meeting convened by a coalition of leading African organizations, including Qhala and Smart Africa, and supported by Mozilla, CEIP, and others. Three themes struck me:
- Decentering and decolonizing power: Calls to stop asking for permission and instead to build African AI models, applications, and skills are growing increasingly confident. We shouldn't underestimate the importance here of China's DeepSeek models in demonstrating that the AI field can be disrupted.
- Decentering principles of data governance and turning to Indigenous forms of knowledge: The Maori Data Governance Principles are an example of this approach. They have also informed work Caribou has done with UNDP on governance frameworks for digital ID, and forthcoming work on data exchange and digital ID.
- Dependency on "metal," especially data centers, to maintain sovereignty:
Africa has only 155 data centers. In 2023 these provided less than 1 megawatt (MW) of capacity per million people, compared with 88 MW in the US and 73 MW in the EU. Analysts suggest Africa needs as much as 1,000 MW of capacity and 700 facilities if African AI is to be independent of the hyperscale cloud and computing providers.
The other theme that stood out at this technology-focused Summit was people. In the headlong rush toward greater investment, including in AI for defense, the struggle to put people first continues. The Computer Says Maybe live-streamed review of the Summit captured this trend.
- Where are all the people? There was a general consensus that the Summit had not done enough to focus on people or risk. The few references to people, such as the launch of Current AI, the public interest AI fund, were welcome, but not enough.
- The challenge of mobilizing public engagement — and protest — was also flagged as a barrier to influence and advocacy. One explanation for this was the challenge of “showing the bodies”; in contrast to other digital technologies where harms are readily apparent, the harms from AI are difficult to link directly to the technology.
- Increasing emphasis on developing AI diagnostic tools and methods to strengthen AI accountability led to a discussion about the role of more constructive engagement: the diagnostician vs. the clinician. The former simply identifies problems, while the latter provides constructive solutions. Importantly, there was widespread agreement among panel members about the importance of criticism, with one panel member noting that “criticism is service.”
But the big elephant in the room was geopolitics. Not just the geopolitics of AI, with US Vice President Vance "warning Europe to go easy on regulating US tech firms," but the broader geopolitical context in which AI sits. The rapid changes taking place in the United States and the decline in multilateral approaches to addressing global challenges, AI among them, will shape the future role of digital technologies in politics and everyday life. If we are to address the challenges of power, we must take much greater account of the geopolitics in which technologies exist.
Authors
Dr. Emrys Schoemaker, Senior Director, Advisory & Policy