AI Action Summit and AI Fringe: The race for AI innovation needs security

Following the UK’s AI Safety Summit in November 2023 – the first global AI event of its kind – the Paris-hosted AI Action Summit took place this week, complemented by the AI Fringe in London, and Plexal attended both events on either side of the Channel.

At the start of 2025, Prime Minister Keir Starmer made it clear in his AI Opportunities Action Plan speech that the UK Government is committed to leveraging AI technology to benefit British citizens and organisations alike. He highlighted concrete examples, from improving healthcare and tailoring education to individual needs, to enhancing construction and making tasks more efficient across the board.

“I am determined the UK becomes the best place to start and scale an AI business. That will be the centrepiece of our Industrial Strategy,” the Prime Minister said. “There has never been a better moment for entrepreneurs with big ideas to grow a small company fast.”

At Plexal, we believe this is encouraging. We’re actively engaging SMEs operating at the intersection of AI and national security through our role as a delivery partner of the Laboratory for AI Security Research (LASR), so any step to ensure their success and acceleration is a step in the right direction.

One development to surface from the AI Action Summit saw the UK and US opt out of signing an international AI agreement at the Paris event. A UK government spokesperson reportedly commented: “We felt the declaration didn’t provide enough practical clarity on global governance, nor sufficiently address harder questions around national security and the challenge AI poses to it.”

Commenting on his time in Paris attending the AI Action Summit’s business-focused day, Plexal CCO Saj Huq said: “Throughout my time at the AI Action Summit held at Station F, there have been many conversations spanning emerging policy, technology, business adoption and the role of governance, security and safety in mitigating the risks whilst maximising the benefits to societies and economies around the world.  

“However, whilst I recognise the need to sharpen the global focus on the opportunities of AI, there was clearly significant appetite at the AI Action Summit to engage in deeper conversations around the emerging cyber, privacy and national security challenges that are also associated with increased AI adoption. It’s important that the global community doesn’t move past or suppress them, as several of these challenges are both near and present today. The role of initiatives such as LASR is critical in addressing these challenges and bridging towards a secure and resilient, AI-enabled future for all.

“From the work we’re doing to engage with AI security enterprises to the discussions I had in Paris, the role of the startup and SME ecosystem in shaping the future of AI can’t be forgotten. Likewise, this is a complex and fast-moving space, where binary trade-offs between innovation and regulation can be overly simplistic; rather we need to consider risk-based approaches and the overall interoperability between various global standards, to ensure that we can fully harness the economic potential of this once-in-a-lifetime technology shift.  This requires ever closer, global dialogue and a focus on collaboration between governments and industry around the world.” 

Elsewhere, our Senior Innovation Lead, Darren Lewis – host of our recent LASR Lates panel – attended the AI Fringe in London, which mirrored the debates from the AI Action Summit for UK audiences.

A cautionary tale from one conversation suggested that if computer intelligence continues to increase drastically beyond that of humans over the next two years, it could be like “unfiltered immigration into your country with regards to jobs taken.” This comment reinforces the importance of security – which can’t be sidelined in the chase for opportunity. Fortunately, many UK Government schemes are focused on that very element, positioning us well for deeper exploration and alignment.

Sharing his thoughts, Darren said: “It was a pleasure to join Milltown Partners’ AI Fringe event and take part in the roundtable discussion on the National Data Library, which was organised expertly by Imogen Stead and facilitated by the Startup Coalition.

“My key takeaway from the AI Fringe event and AI Action Summit alike is that whilst the US is keen to push for AI innovation and speed, security, safety and ethical considerations still need to be championed. I don’t think one has to mean a trade-off for the other; we can innovate at speed and do it securely, but neglecting either will lead us to fall behind or be open to risks. Let’s not fall into the same traps that left the internet open to threats in the first place as the AI age takes hold.”

And just today, Friday 14th February, in another step to double down on security, the AI Safety Institute has been renamed the AI Security Institute. A UK Government announcement notes the revised name aligns with the organisation’s “focus on serious AI risks with security implications, such as how the technology can be used to develop chemical and biological weapons” and how it can be used to carry out cyber-attacks.

Peter Kyle, Secretary of State for Science, Innovation and Technology, detailed: “The changes I’m announcing today represent the logical next step in how we approach responsible AI development – helping us to unleash AI and grow the economy as part of our Plan for Change.

“The work of the AI Security Institute won’t change, but this renewed focus will ensure our citizens – and those of our allies – are protected from those who would look to use AI against our institutions, democratic values, and way of life.”