Last Wednesday, we hosted our first LASR Lates event in London at Plexal Stratford. But before we dive into the highlights from our AI security event premiere, let’s recap the activities that led to this point.
In November, at the NATO Cyber Defence Conference, Pat McFadden MP, Chancellor of the Duchy of Lancaster, said: “NATO needs to continue to adapt to the world of AI, because as the tech evolves, the threat evolves.”
And with that, he revealed the Laboratory for AI Security Research (LASR), a public-private partnership bringing together UK Government departments and agencies with innovation and research teams from Plexal, University of Oxford, The Alan Turing Institute and Queen’s University Belfast.
In December, the partners’ respective roles were set out at the official LASR launch event. Our role at Plexal is to convene LASR’s multi-disciplinary approach and foster collaboration between key industry players, SMEs and startups from across the global technology ecosystem.

Saj Huq, CCO and Head of Innovation at Plexal, said: “Through this world-class LASR partnership, Plexal will drive the development and commercialisation of breakthrough solutions to enhance resilience of public and private sectors, creating growth vectors for the UK’s tech ecosystem.”
In January, days after Prime Minister Keir Starmer spoke of an ambition to “make our country an AI superpower”, we opened our LASR Hub – a shared office space designed to foster alliance and innovation in the rapid-growth field of AI security. Located at Plexal Stratford on the Queen Elizabeth Olympic Park, the LASR Hub is for individuals and teams across the UK working at the intersection of security and AI.
Which brings us to LASR Lates on Wednesday 29th January – an evening exploring the security risks posed to AI and the opportunities for innovation, with an esteemed panel of experts from the AI security sector.
Opening the night, Saj took to the stage in Plexal Park. “We started LASR Lates to build the conversation in the AI security space, which is still forming as it bridges the mature legacy markets such as cyber security together with emerging technologies,” he said. “That requires a slightly broader constituency to have the full conversation and drive collaborations. And LASR Lates is one of the tools we’re deploying to bring together the ecosystem and progress the conversation.”
Darren Lewis, Senior Innovation Lead at Plexal, took to the stage next to chair the evening’s panel and was joined by:
- Angela Cross, UK Sales Director Central Government and Defence, Dell Technologies
- Louise Axon, Research Associate, University of Oxford
- Peter Garraghan, CEO and Co-founder, Mindgard
- Josh Collyer, Principal Security Researcher, The Alan Turing Institute

AI security vs cyber security
Setting the scene, Darren first queried how AI security differs from cyber. “We’ve been dealing with traditional cyber threats for 20 years plus and have become highly skilled at handling them,” kicked off Angela. “The challenge now is that we don’t fully understand AI-driven threats yet. It’s a fast-moving space and we don’t always know what to expect. Collaborating with initiatives like LASR and leveraging SMEs’ brainpower, combined with large organisations’ deployment capabilities, is the way forward.”
Building on the AI-driven threats point, Louise said: “People are talking about two halves. One is AI augmenting cyber threats, such as reconnaissance, infection and more personalised phishing; the other is working out what might be new threats, like deepfakes and disinformation. But then it’s a question of whether that really falls to cyber security agencies.”
Josh had previously noted we shouldn’t be overly afraid of AI security challenges because we’ve faced cyber challenges in the past. Picking up on this thread, he said: “The hype around AI security makes it seem scarier than it is. The key is recognising traditional cyber security solutions can sometimes address AI threats. However, AI security presents new challenges – such as how to patch vulnerabilities in AI models. Unlike software, where patches are tested and deployed, patching a neural network is a complex problem.”
“People use AI to check AI but who judges the judges?”
Keeping problem complexity front of mind, Darren questioned whether existing solutions can address this, or whether it’s a case of taking a step back and leveraging early-stage research and our best academic minds for answers when technologies go beyond human comprehension. For this, he turned to Mindgard’s Peter – who spun his business out of Lancaster University – for his perspective.

“Fundamentally, the issues I see in AI security on a day-to-day basis are appsec problems, where software isn’t built properly,” opined Peter. Elaborating, he discussed the concept of LLM as a judge – his self-declared personal bane. “People use AI to check AI but who judges the judges? It’s helpful – but, at some point, you need traceability to track what’s happening. AI has its own biases.”
Risks and rewards: AI within businesses
Of course, what would an AI discussion be this week without moving on to DeepSeek? The Chinese AI platform has dominated international headlines following a sudden surge to the top of app stores – to the detriment of existing offerings – prompting US President Donald Trump to declare it a “wake-up call” for American firms.
With industry in mind, Darren asked whether there are bigger considerations when enterprises adopt AI solutions, such as supply chains or specific sectors. The University of Oxford recently produced a research paper with the World Economic Forum, Artificial Intelligence and Cybersecurity: Balancing Risks and Rewards, outlining business considerations – positioning Louise well to offer insights.

“One of the key recommendations was to assess each AI use case and whether it’s worth the risks,” she began. “We were also looking beyond the specific vulnerabilities in AI models at how to think through securing the entire AI lifecycle and how that fits end-to-end into a whole business process. That’s in addition to secure-by-design, ensuring there’s a strategy for monitoring over time with resilience.”
Louise added that governing AI security within organisations, especially the more complex ones, requires serious consideration. “Which stakeholders might need to be involved? It might be technical teams, it might also be the legal and compliance teams to avoid breaching legislation,” she said.
Indeed, as AI will seemingly deliver benefits across all company departments, interest extends well beyond IT teams – which raises the likelihood of shadow AI projects materialising and the risk of organisations running AI in the dark until it’s too late to apply AI security.
With Dell working with large and small organisations, Angela is acutely aware of shadow AI trends. “We see there are too many disparate areas of the business,” she detailed. “Everyone wants to utilise AI to the benefit of their department or company but very few have worked out how to plan for their potential risk profile.”
AI security infrastructure and skills – where do we stand?
“There are certain well-funded UK universities with huge supercomputers and billions of pounds,” Josh started, “but there are a lot of equally talented academics who lack the computational resources for AI security research.”
Moving beyond infrastructure to skills, he added that the UK is blessed with some of the best universities and plenty of AI talent in London and further afield. The issue, however, is a breakdown in knowledge sharing and collaboration across skillsets.

“To Peter’s point around data scientists making systems with no idea about security, a lot of cyber security people think AI is smoke and mirrors – but they have the skills; they’re half of the puzzle almost,” said Josh. “So, they need to team up with the AI experts to work out problems. I’m optimistic the skills issue will be solved by people talking to each other.”
This reinforces Angela’s earlier call for collaboration between initiatives like LASR, SMEs and large firms to tackle the unknown.
The light at the end of the tunnel
Looking ahead at what the future holds with a glass half full, Darren’s closing question prompted our speakers to share what they’re most excited about for AI security.
“If we can protect people in the way organisations need to, you can innovate so much – making people’s lives easier, allowing dreams to be fulfilled and opening up huge possibilities and a world of opportunity,” said Angela.
Not quite as optimistic, Peter followed: “You’ll never fix the problems with security.” On a more upbeat note, he added: “But my view is, when we treat AI security problems with equal knowledge and worry as we do other non-AI cyber security problems, that will be an achievement.”
Louise shared: “If we get AI security right, the opportunities for enhancing education programmes, cyber security tooling and broader awareness are exciting.”
Closing the conversation, Josh concluded: “A lot of companies may be bad at cyber security but we’re not getting ransomware every day, so we’re doing something right. For organisations where the security of AI is the big issue, I’m excited to see what they do with it when they adopt AI. There are big corporates that may become cool and slick. Then smaller companies may adopt AI to create products we’d never thought about to take on big players.”
If you couldn’t make the LASR Lates London launch, we have good news – we’re going on tour! So, book in and we’ll see you in Manchester for our next event on Wednesday 26th February and in Cheltenham on Wednesday 5th March.
