LASR x OWASP: Securing the future of agentic AI

Plexal recently hosted a joint event with OWASP (the Open Worldwide Application Security Project, a community dedicated to improving the security of software) to explore one of the fastest-moving areas in technology: agentic AI. With new frameworks, real-world case studies and insights from leading voices, the event gave attendees a clear look at the security challenges shaping this space.

Why agentic AI is different

Traditional AI generates outputs. Agentic AI goes further – reasoning, planning and taking actions on our behalf. That leap introduces new risks. As OWASP board director John Sotiropoulos explained, the community is building practical guidance through initiatives like the Top 10 for LLM Applications and a new Top 10 for Agentic Applications. These aim to give organisations a starting point for understanding threats such as identity spoofing, rogue agents and cascading hallucinations.

Risks in the real world

Daniel Jones from Microsoft’s AI Red Team outlined new research into “computer use agents” – systems that navigate apps and websites by analysing what’s on screen. His team has uncovered risks ranging from UI deception to identity over-delegation, where agents take privileged actions meant only for trusted users. The lesson? Agents need more than screenshots and prompts – they need sandboxing, data governance and secure-by-design thinking at every level.

Challenges and opportunities

Plexal’s Innovation Lead, Holly Smith, and Technology Lead, William Gurney, spotlighted LASR’s current Opportunity Call, a programme for industry collaboration on agentic security challenges. These include protecting the broader agent ecosystem, securing confidential compute and retrieval-augmented generation (RAG) architectures, and developing security tooling tailored for agent infrastructure. As part of this programme, companies will receive technical mentorship from Cisco’s principal engineers to support their solution development. In the coming months, LASR will look to identify further opportunity spaces for innovators to contribute solutions that make AI safer by design.

A complex supply chain

The discussion also looked at supply chain risks, where plugins, templates and registries can become entry points for attackers. Speakers emphasised that security isn’t a one-shot fix – it’s an ongoing process that needs strong identity checks, observability and collaboration across the ecosystem.

The session closed with a reminder to move beyond hype and fear. By embedding secure-by-design principles and working together, we can build trust in the next generation of AI.

Looking to engage with The Laboratory for AI Security Research? LASR will be hosting an AI security meet-up in Cardiff on Tuesday 23rd September, which you can register for here.