AI Security Challenge

The computing shift created by artificial intelligence (AI) stands to radically transform every industry and sector, with AI becoming increasingly integral to business and society. 
 
But from a security standpoint, AI systems introduce novel vectors for attack and misuse, as well as wider challenges around reliability, stability and ethical considerations. 
 
Plexal has monitored and researched AI developments since our 2017 launch, both independently and in mission alignment with partners. In that time we’ve helped enterprises and governments adopt AI-enabled solutions, mentored startups developing AI products on our programmes, hosted community events exploring the subject and more besides. 
 
We’re pleased to have an exciting new AI security project on the horizon that we’ll reveal very soon. In the meantime, this is the moment to share your details if you’d like to be part of it.

THE OPPORTUNITY

Plexal is seeking innovative companies operating at the intersection of cyber security and artificial intelligence (AI). We’re particularly interested in organisations working on AI security systems and their components, including those focused on data privacy, model protection and secure AI development practices. 

Our goal is to enhance the security and trustworthiness of AI systems. 

We’d like to hear from innovative companies operating in the AI security market as we work to understand national interests in this area.  

We’re particularly interested in novel ideas and the testing of real-world solutions using relevant datasets where possible. 

Want to be a part of this mission?

Companies will have the opportunity to work on projects across four key areas of AI cyber security:

Model Security
Protecting AI models from security threats is becoming critical as organisations increasingly deploy AI systems in production environments. We’re seeking solutions that enhance the security and robustness of AI/ML models while maintaining their performance. This includes preventing model theft and reverse engineering, defending against adversarial attacks, and ensuring model integrity across different deployment environments. Of particular interest are practical approaches that can be integrated into existing AI development and deployment pipelines. 

Data Security
As organisations build and deploy AI systems, protecting sensitive data throughout the AI lifecycle is critical for maintaining privacy, compliance, and competitive advantage. We’re seeking solutions that safeguard data throughout the AI lifecycle – from collection through to production use. This includes preventing unauthorised access to training data, ensuring compliant data processing, protecting against data extraction attacks on deployed models and managing privacy challenges posed by synthetic and generative AI data. Of particular interest are solutions that balance data privacy and security with effective AI development.

Learning Security
The training and optimisation of AI models presents critical security challenges for organisations deploying AI systems. We’re seeking solutions that ensure the integrity and security of AI training processes, protecting against tampering, manipulation, and unauthorised access during development. This includes protecting training pipelines, securing model parameters, and ensuring the robustness of training methods. Of particular interest are solutions that help organisations implement secure training practices while maintaining model performance and efficiency.

AI and Cyber Security
As AI systems become more prevalent and sophisticated, new security challenges continue to emerge at the intersection of AI and cyber security. This open challenge seeks novel and innovative solutions that address any aspect of AI security not covered by other themes. We’re particularly interested in forward-thinking approaches that tackle emerging threats, socio-technical challenges, or unexplored areas of AI security. This could include solutions for AI supply chain security, model provenance verification, deployment security, or broader societal implications of AI security. We welcome creative proposals that challenge conventional thinking and offer fresh perspectives on securing AI systems for the future.

EXPRESS YOUR INTEREST