Understanding company risk appetites, rise of the Chief AI Officer and security refocusing

As part of our work on the UK Government's Laboratory for AI Security Research (LASR), the second edition of LASR Lates, our AI security panel and networking event series, has taken place at the Greater Manchester Digital Security Hub (DiSH) – home of Plexal's team in the North West.

LASR brings together UK Government departments and agencies with innovation and research teams from Plexal, the University of Oxford, The Alan Turing Institute and Queen's University Belfast. Plexal's specific role is to foster collaboration between key industry players, SMEs and startups from across the global technology ecosystem.

The Manchester LASR Lates event follows our London launch in January, held at Plexal Stratford on the Here East campus. Discussing the AI security landscape, the launch offered insights including:

“Fundamentally, the issues I see in AI security on a day-to-day basis are appsec problems, where software isn’t built properly” – Peter Garraghan, CEO and Co-founder, Mindgard.

“Which stakeholders might need to be involved? It might be technical teams; it might also be the legal and compliance teams to avoid breaching legislation” – Louise Axon, Research Associate at University of Oxford.

“Collaborating with initiatives like LASR and leveraging SMEs’ brainpower, combined with large organisations’ deployment capabilities, is the way forward” – Angela Cross, UK Sales Director Central Government and Defence, Dell Technologies.

“A lot of cyber security people think AI is smoke and mirrors – but they have the skills; they’re half of the puzzle almost” – Josh Collyer, Principal Security Researcher at The Alan Turing Institute. 

Much has happened since that event, which took place just as DeepSeek was making its presence known. The AI Action Summit in Paris discussed the safety and security of AI to mitigate risks whilst maximising benefits to societies and economies worldwide, and the AI Safety Institute was renamed the AI Security Institute.

As the AI security conversation continues to evolve, it's important to keep engaging in the dialogue, and our LASR Lates session in Manchester brought together the following speakers to do just that: 

  • Ruby Motabhoy – Senior Lead, National Security, Plexal (Moderator) 
  • Simon Cook – Professor in Practice, Data & Cyber Research, Lancaster University 
  • Matt Squire – CTO, Fuzzy Labs 
  • Sarah Schlobohm – Chief AI Officer, zally 

“This conversation couldn’t come at a better time, where the opportunities that AI presents are in a fine balance with the potential risks this evolving technology is introducing,” Ruby kicked off.

“At the start of the year we saw the government’s rollout of the Plan for Change, focused on mainlining AI into public services and driving efficiencies. Shortly after, the UK’s AI Safety Institute was renamed the AI Security Institute, reflecting the global threat landscape, where hostile actors may look to use AI to target our core institutions and ways of life. I’m delighted to be joined by a fantastic panel to look at this from all angles and identify the opportunities for the wider ecosystem to get involved with one of the biggest technological evolutions in our lifetime.”

What’s your risk appetite?

Given the media and industry furore driven by DeepSeek last month, Matt used this as a jumping-off point. “AI security is increasingly important, especially within cyber,” he said. “Unlike traditional software, you can’t inspect large language models (LLMs) to fully understand their behaviour. Unexpected outputs may occur due to unpredictable prompts, modified training data or even supply chain attacks. 

“When it comes to trust, it’s about more than just DeepSeek – you can trust DeepSeek to the extent you can trust anything. It’s about building trust in general. People need to ask: Where have I got the tool from? Do I trust the source? Do I trust the information? Have I checked for vulnerabilities? It’s really a question of: what’s your risk appetite?”

As Ruby put it: “We should all be kicking the tyres a bit more to ensure we know what we’re getting ourselves into.”

Introducing the Chief AI Officer

Sarah joined zally at the start of this year in the newly created role of Chief AI Officer, and Ruby was keen to understand her position and her thoughts on where the role stands more broadly in the technology ecosystem.

“I think the Chief AI Officer is a role we’ll see more of in the future, just as Chief Data Officers emerged a few years back,” Sarah said. “We’re at the peak of the AI hype wave so, especially when you have an AI-based product like zally, you’ll see more Chief AI Officers. Of course, not every company will need one – the same way a Chief Data Officer isn’t required everywhere.”

Although the role hasn’t reached its final form, the birth of the Chief AI Officer comes with opportunity. “The Chief AI Officer role isn’t clearly defined yet; there’s no playbook, so it’s a case of figuring out what it entails,” Sarah continued. “Is it a focus on AI products, AI in business, AI compliance or AI security? The answer is probably all of it. As we’ve seen with traditional machine learning models, they don’t exist in isolation, so we need a framework to guide how the roles and technology should function.”

AI security and academia

Complementing the private sector perspectives, Simon lifted the lid on what he’s seeing from his post at Lancaster University. “Our approach is to use academia and industry as force multipliers to help the government solve security problems,” he explained, first pointing to NW CyberCom. The initiative is a collaboration between Lancaster University, the University of Manchester, the University of Salford, the University of Liverpool, Manchester Metropolitan University and the University of Central Lancashire, which aims to unlock the cyber security potential of the North West, with partners including Plexal, CRSI and MIT.

“NW CyberCom is a pilot to commercialise security projects. We’ve funded 17 research projects, each receiving around £15,000 for proof-of-concept validation. We also have innovators-in-residence – successful founders who mentor academics. Our strategic ecosystem work, guided by MIT’s Regional Entrepreneurship Acceleration Program (REAP) framework, is helping create a North West Cyber Corridor.” 

Matt supported these efforts, saying it’s important that academia and industry work together, as AI-specific cyber security is still novel. With this in mind, it was noted that LASR has developed a problem book of some 50 security challenges, some of them academic, others addressing ethical and legal protections. Given the rapid rate of change, the problem book is an opportunity to help shape how AI security is regulated.

What’s in a name?

The AI Safety Institute’s renaming to the AI Security Institute sparked debate at the time of the announcement and again during LASR Lates, as one audience member expressed disappointment, saying: “Transparency should be the priority.”

Simon declared: “AI safety is a major concern. The Doomsday Clock moved closer to midnight due to misinformation risks enabled by AI.”

Sarah’s perspective was measured. “The name doesn’t matter – what matters is what it does.”

And according to the Secretary of State for Science, Innovation and Technology, Peter Kyle: “The work of the AI Security Institute won’t change, but this renewed focus will ensure our citizens – and those of our allies – are protected from those who would look to use AI against our institutions, democratic values, and way of life.”

Our LASR Lates discussions will continue to travel the UK, so if you’d like an event where you are, get in touch: aisecurity@plexal.com