This week, the Laboratory for AI Security Research (LASR) team travelled to the RSA Conference 2025 in San Francisco, engaging with the global ecosystem to strengthen international cooperation on AI security.
LASR, the public-private partnership comprising UK Government, Plexal, Oxford University, The Alan Turing Institute and Queen’s University Belfast, is committed to fostering an ecosystem that drives research, accelerates innovation, informs policy and delivers real-world impact in AI security.
Achieving this requires collaboration with a broad range of actors, spanning industry large and small, as well as investors, academia and government, to shape the future of secure AI.
At RSA Conference, LASR hosted a roundtable discussion convening international partners and thought leaders to explore shared challenges and opportunities in securing AI on a global scale. During this event, we were delighted to be joined by Anne Keast-Butler, Director of GCHQ, a key strategic partner of LASR.
Together, we examined opportunities at the intersection of AI, cyber security and national security. The discussion reinforced the need for deeper collaboration to address unsolved AI security challenges and to drive wider adoption of AI security solutions in the market, with key takeaways covering:
The international security conversation
AI is dominating national security conversations in the UK and US, with shared concerns around the pace of technology change and the need to balance AI deployment with robust security.
Bridging national security and commercial markets is essential
There’s a need to shorten the feedback loop between national security requirements and industry innovation. Initiatives like LASR and the National Security Strategic Investment Fund (NSSIF) in the UK, alongside the Defense Innovation Unit (DIU) in the US, are leading efforts to enable public-private collaboration.
Acceleration of innovation
AI security innovation is accelerating, with investment levels nearly doubling year-on-year. The sector is becoming increasingly attractive for venture capital, with the fusion of cyber, AI and critical national infrastructure seen as a key opportunity space for investment and technology development.
Demand shaping is required
Shaping the demand for secure AI is critical. AI security will be central to unlocking AI adoption at scale, yet uptake of AI security solutions remains limited. Strategies are needed to incentivise the ecosystem to adopt secure AI as standard and to procure tools that support its secure use. Here, governments can play a catalytic role through bold procurement, standard-setting and framework development.
A mix of minds
We need to bring the right mix of minds and experience together to tackle these challenges. Technical convenors like LASR and The AI Security Institute (AISI) can ensure that research funding is directed towards real-world use cases and that diverse experts are consulted on complex problems.
The conclusion?
AI security is complex and cross-cutting, but this presents an enormous opportunity. With the right blend of collaboration, policy, commercial strategy and convenors such as LASR, the opportunity and demand for secure AI adoption across critical national infrastructure is only going to grow.
Saj Huq, CCO at Plexal, who hosted the discussion, said: “In the six months since LASR launched, we’ve enthusiastically taken the AI security discussion nationwide. This year’s RSA Conference theme was ‘Many Voices. One Community’, and it proved the ideal setting for scaling this conversation onto the global stage in a focused environment with the right mix of minds to drive all-important change.
“As AI security innovations continue to develop, we’re committed to supporting commercial progress that advances both national security and economic interests. We’re thankful to Anne from GCHQ, our fellow LASR partners and guests from across the UK and US technology ecosystems for committing their time to the roundtable. We look forward to continuing this dialogue in the months ahead.”