LASR Validate, our first programme designed to support innovators with their development of AI security products, is officially underway. It forms part of our work within the Laboratory for AI Security Research (LASR), the public-private partnership launched by UK Government last year.
LASR’s mission is to mitigate risks to and from AI, drive national resilience in AI skills and position the UK as a global leader in AI security and governance. Plexal’s role is to accelerate the commercialisation of AI security innovations, bringing novel capabilities to market faster and LASR Validate is one such method we’re applying to achieve this.
Over a six-week period, LASR will fast-track industry validation of AI security product opportunities across a range of key sectors such as financial services, telecoms, defence and national security.
The AI security SMEs selected will learn about key challenge areas across key sectors, can collaborate with one another and propose a proof of concept to test a new AI security capability that addresses real-life challenge areas.
Lucy Coutts, Plexal’s Innovation Lead on the LASR Validate programme, said: “It was a pleasure to convene our talented LASR Validate cohort to Plexal Stratford this week. We heard all about their ambitions, which spanned everything from peer collaboration and knowledge sharing to tackling real-world adversaries.
“Situating ourselves within the newly opened LASR Hub, a shared workspace designed to foster alliance and innovation in the rapid-growth field of AI security, made for the perfect environment in which to build our community.
“The chosen SMEs have a wealth of incredible developments and concepts to offer and we look forward to supporting them to the next level, providing the opportunity for them to access industry and government partners.”

Having attended the recent AI Action Summit and been part of the global conversation, Saj Huq, Plexal CCO, said: “Whilst I recognise the need to sharpen the global focus on the opportunities of AI, there was clearly significant appetite at the AI Action Summit to engage on deeper conversations around emerging cyber, privacy and national security challenges that are also associated with increased AI adoption. The role of the startup and SME ecosystem in shaping the future of AI can’t be forgotten.”
The LASR Validate cohort includes:
Aeris-UK
Aeris-UK bridges AI, advanced modelling and operational needs, emphasising modularity, cost-effectiveness and reliability for deployment across diverse contexts, from battlefield operations to critical infrastructure protection. The team has developed an innovative capability called SATORI – a simulation tool designed to analyse vulnerabilities, quantify risks and enhance system resilience.
eCora
eCora has created a secure-by-default platform for new or existing applications, which can be wrapped into a security container that allows them to be deployed into untrusted or hostile environments. It uses underlying hardware innovations in trusted computing and confidential computing, enabling workloads to be deployed as a black box that can be used as intended but with no ability for users or hackers to see inside.
Fendr
Fendr launched having witnessed sensitive code leak into a large language model, recognising that while generative AI tools are accelerating the way users understand, write and debug code, they’re also creating data vulnerabilities and scope for cyber attacks. The company builds tools that enable secure AI usage by intelligently monitoring and protecting data.
Fuzzy Labs
Fuzzy Labs is focused on advancing and innovating open-source machine learning operations (MLOps) solutions to streamline AI model deployment and make a positive impact. The aim is to empower data scientists to productionise AI models and make it easy for them to collaborate with one another, working more efficiently together with time efficiencies and fewer errors.
Pytilia
Pytilia acknowledges that as AI models are deployed into business-critical functions, there’s still a need for human analyst review of the output or any anomalies detected, due to high volumes of false positive alerts, leading to inefficient processing and excessive analyst workloads. Pytilia is working to solve this with a feedback loop engine to reduce the false positive alerts created by AI models, which will learn characteristics of human analyst feedback to filter and prioritise alerts.
Syncrosis
Syncrosis has developed the Helios Matrix, its flagship technology platform designed to unify fragmented intelligence systems, deliver real-time situational awareness and empower decision-makers with actionable insights. It detects and neutralises malicious inputs such as poisoned queries or prompt injections and secures AI systems across their entire lifecycle, integrating continuous data validation and tamper-proof pipelines to eliminate vulnerabilities.