AI represents a transformative opportunity for society. Its wide-reaching benefits will only be realised through adoption at scale; confidence and trust in the technology are required, and they are only gained once security and safety concerns have been addressed.
The Laboratory for AI Security Research (LASR) focuses on cutting-edge research, strategic alignment with national objectives and a commitment to social value and inclusivity. Its mission is to mitigate risks to and from AI, drive national resilience in AI skills and position the UK as a global leader in AI security and governance.
Launched by the UK Government in November 2024, LASR brings together world-leading experts from organisations including Plexal, Oxford University, The Alan Turing Institute, Queen's University Belfast, the Foreign, Commonwealth and Development Office (FCDO), the Department for Science, Innovation and Technology (DSIT) and the intelligence agencies to boost UK cyber resilience and foster growth.
Public-private partnership
LASR is a public-private partnership that fosters collaboration, integrating the best minds from academia, industry and government.
Front door for industry
By closing the gap between the supply of AI security innovations and industry demand, we aim to accelerate their commercialisation, bringing novel capabilities to market faster.
Connect with the AI security ecosystem
Apply to join the LASR Connect community platform to engage with the AI security ecosystem and explore news, insights and opportunities.
Collaborate at the LASR Hub
Learn more about the Hub and join a community of innovators, with space to work on solving real-world AI security challenges.
Next gen of AI security
Apply to join upcoming AI security programmes that support innovators in developing and testing new solutions in real-world environments.
Contribute to LASR
Share specific AI/ML adoption challenges your industry faces and real-world examples of AI vulnerabilities to help shape the focus of future innovation.
What we mean by AI security
AI security refers to the practice of defending artificial intelligence models, applications and integrated systems against harmful actions by adversaries, which could result in data breaches, operational disruptions, asset damage or theft of sensitive information. It addresses both traditional cybersecurity challenges and the distinct, emerging vulnerabilities tied to AI processes and supply chains, a class of threats often termed adversarial machine learning.
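To make adversarial machine learning concrete, the sketch below shows the fast gradient sign method (FGSM), one of the simplest and best-known attacks of this kind. The tiny model, input shape and perturbation budget are illustrative assumptions rather than anything specific to LASR's work; the point is only that an attacker can craft a small input perturbation directly from a model's own gradients.

```python
# Minimal FGSM (fast gradient sign method) sketch in PyTorch.
# The model, input shape and epsilon below are illustrative
# assumptions: the mechanics, not the specifics, are the point.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 2))
model.eval()

x = torch.randn(1, 20)      # a benign input
label = torch.tensor([0])   # its true class
loss_fn = nn.CrossEntropyLoss()

# Compute the gradient of the loss with respect to the *input*.
x.requires_grad_(True)
loss = loss_fn(model(x), label)
loss.backward()

# FGSM: nudge the input in the direction that increases the loss.
epsilon = 0.25              # perturbation budget (assumed)
x_adv = x + epsilon * x.grad.sign()

with torch.no_grad():
    print("clean prediction:      ", model(x).argmax(dim=1).item())
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

On many inputs the perturbed prediction differs from the clean one even though the change to the input is small, which is exactly the class of failure that conventional security testing does not surface.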
Because AI/ML is inherently stochastic and non-deterministic, effective AI security requires thoughtful approaches, including context-driven risk assessments, comprehensive threat modelling and targeted testing methods. Robust AI security solutions must encompass not only the AI models themselves but also their interactions with conventional software systems and infrastructure.
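As a minimal illustration of that stochasticity, the sketch below trains the same toy architecture twice from different random initialisations and compares the two models' predictions on the same unseen inputs. The dataset, architecture and training loop are all assumptions made for the example; the observation is that the runs typically disagree on a fraction of inputs, which is why one-off point tests give weak assurance and testing must be context-driven and targeted.

```python
# Two training runs of the same (assumed, toy) architecture,
# differing only in random initialisation, can disagree on the
# same inputs: a simple view of ML's inherent non-determinism.
import torch
import torch.nn as nn

# Fixed synthetic dataset and probe inputs (illustrative only).
torch.manual_seed(42)
X, y = torch.randn(200, 10), torch.randint(0, 2, (200,))
probe = torch.randn(50, 10)   # unseen inputs to compare behaviour on

def train(seed: int) -> nn.Module:
    torch.manual_seed(seed)   # different seed -> different initial weights
    model = nn.Sequential(nn.Linear(10, 8), nn.ReLU(), nn.Linear(8, 2))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(100):
        opt.zero_grad()
        nn.functional.cross_entropy(model(X), y).backward()
        opt.step()
    return model

m1, m2 = train(0), train(1)
with torch.no_grad():
    p1 = m1(probe).argmax(dim=1)
    p2 = m2(probe).argmax(dim=1)

# Fraction of probe inputs on which the two runs disagree.
print("disagreement rate:", (p1 != p2).float().mean().item())
```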
The security of AI is the foundation upon which it can be trusted to enable our economy, security and prosperity.