COMMENTARY

The Paris Summit: Au Revoir, Global AI Safety?
Giulia Torchio, Francesco Tasin

Date: 14/02/2025

This week, Paris hosted the third global Artificial Intelligence (AI) Summit. Political leaders, tech CEOs and experts from around the world flew into the French capital to discuss an extended agenda on AI’s political, cultural, societal and economic impact. To conclude the event, 60 actors signed a “Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet,” aiming to align AI development with progress towards the Sustainable Development Goals (SDGs).

However, the US and the UK, two leading AI nations, refused to sign the statement. This was not entirely surprising given that, earlier this week, US Vice-President J.D. Vance criticised excessive AI regulation and reiterated the US’s commitment to a “hands-off” approach. A spokesperson for UK Prime Minister Keir Starmer commented that the specific initiative was not in the country’s interest. These snubs suggest stark changes in the narrative on AI safety and international AI governance.

From Bletchley to Seoul – first hopes of a global effort

The first AI Safety Summit took place in late 2023 in Bletchley Park, Buckinghamshire. The objectives of the gathering were to kickstart a multilevel conversation on the risks associated with AI technologies, especially regarding frontier models, and discuss potential internationally coordinated actions to mitigate them.

The UK AI Safety Summit produced multiple concrete outcomes: most notably, the Bletchley Declaration on AI safety, which saw 28 countries (including the US and China) agree to promote safe, human-centric, trustworthy and responsible AI; the creation of the first AI Safety Institutes (AISIs), spearheaded by the UK and the US; and the announcement of a “State of the Science” report on the capabilities and risks of frontier AI ahead of the next summit.

Particularly significant was the creation of the AISIs. While they lack formal regulatory powers, these institutes cooperate with governments and industry to inform regulatory strategies, conduct AI safety research and model evaluations, and lay the foundations for international AI governance.

After Bletchley came the Seoul Summit and AI Global Forum in May 2024. This “mini-summit” aimed to reaffirm commitment to AI safety, present the safety measures underway, and expand the discussion to include innovation and inclusivity. “Godfather of AI” Yoshua Bengio also presented an interim version of his International Scientific Report on the Safety of Advanced AI, exploring the potential of general-purpose AI, its risks and mitigation strategies.

Despite declining public interest, the Seoul Summit still bolstered global AI governance. Over the two-day event, 10 countries and the EU pledged to establish AI safety institutes and an international network to enhance coordination. Japan, South Korea, Singapore, France and Canada have since launched their institutes, while the EU announced that its newly established AI Office would fulfil that role.

Though it attended the summit, China notably abstained from such commitments. Nevertheless, two Chinese companies, including the country’s largest large language model (LLM) provider, joined other AI businesses in signing commitments to implement risk identification, prevention and transparency measures.

Then came Trump and Paris: Au Revoir, Global AI Safety?

French President Emmanuel Macron had big and commendable ambitions for the third AI Summit, broadening its agenda beyond strict AI safety to the risks and opportunities of AI development in other domains, such as climate change and energy use. The programme centred on five main themes: public interest AI, the future of work, innovation and culture, trust in AI, and global AI governance.

It was also an attempt to reposition Europe as a leading actor in AI development and deployment and to attract investment. The day before the opening, Macron announced that France would receive €109 billion in private AI infrastructure investments. This was followed the next day by the launch of the EU AI Champions Initiative, which combines €58 billion in public funds with pledges from 70 European companies to spend €150 billion on AI R&D.

While welcome, these grandiose announcements should not overshadow the real failures of the Paris Summit. As many experts and attendees critically noted, the US-China race for AI dominance is driving a dramatic acceleration of AI development and a shift away from AI safety. Tellingly, the final declaration of the Summit included no substantial commitments to AI safety, despite the publication of the finalised International AI Safety Report 2025.

The US and UK’s decision to reject the summit’s final statement raises serious questions about the future of global AI governance. International alignment on AI is cracking under the pressures of Trumpian politics, with the American delegation specifically objecting to references to AI’s existential risk, its environmental impact, and a role for the UN.

Giving up is not an option: A three-pronged strategy

Whether this shift sounds the death knell for international AI governance and diplomacy remains to be seen, but the UK and US’s refusals signal a troubling crisis of multilateralism. As the international stage fast becomes a realm of contestation and confrontation rather than convergence and collaboration, Europe must double down on a three-pronged approach of ambition, partnership and enforcement.

First, Europe must seize upon the new growth-focused international narrative around AI as an opportunity, not a threat. The shift from a safety-only approach to a safety-and-growth approach aligns with Europe’s ambitions for a twin digital and green transition. The disregard across the Atlantic for the environmental and social costs of AI is also an opportunity for Europe to stand out in the competition for talent and funding.

Second, determined engagement in the AI summit series remains the best hope for coordinating internationally on AI regulation and safety. Many leading nations, including Brazil, India, and China, have acknowledged the risks posed by AI technologies. Despite its limitations, the summit series can and must continue to facilitate cooperation and capacity building with like-minded partners.

Third, Europe must stand its ground on AI regulation and enforcement. Elon Musk and J.D. Vance may attack European laws, but backing down on the Digital Services Act and the General Data Protection Regulation (GDPR), or withdrawing the AI Liability Directive, is not the right way forward. As AI grandees warn that laws like the EU’s AI Act do not go far enough, Europe’s international role and clout are inexorably linked to staying on the right side of history in dealing with global threats.



Giulia Torchio is a Policy Analyst at the European Policy Centre.

Francesco Tasin is an AI Fellow at the European Policy Centre.

The support the European Policy Centre receives for its ongoing operations, or specifically for its publications, does not constitute an endorsement of their contents, which reflect the views of the authors only. Supporters and partners cannot be held responsible for any use that may be made of the information contained therein.

Photo credits: Ian LANGSDON / AFP

