Fendr: Helping teams use generative AI without putting data at risk 

When you’ve spent years building software for public sector organisations, you recognise how critical trust, data handling and compliance really are. For Fendr founder and engineer Katie Whitlock, this reality hit home when she saw colleagues pasting sensitive information into ChatGPT.  

It was clear that in the process of trying to be more productive, employees were inadvertently turning into insider threats. They’re not alone – businesses across every sector are grappling with how to harness generative AI (gen AI) without compromising data security. 

That’s where Fendr comes in, tackling the risks of shadow AI head-on. Its thoughtful tool allows companies to balance the risk of data leakage against the productivity gains of AI by managing what data is entered into unmanaged AI tools, and how. Through a real-time admin dashboard, it blocks major risk vectors like copy, paste and app integrations while still allowing input, helping to enforce company policies and reduce data loss. Teams can adjust policies instantly – per site and action – and the dashboard provides clear insights into AI usage, with alerts for potentially risky, unsanctioned sites.

The tool also provides key data points to help organisations drive AI adoption. By understanding which unmanaged AI tools employees are using and how, organisations can see where to invest in tooling and training, ensuring the right services are available in a secure manner.

“Currently, companies can choose between costly solutions that focus on restrictive blocking of gen AI and isolating it to a particular environment, or accepting the risk and allowing full access,” explains Katie. Rather than isolating gen AI tools in gated environments or banning them altogether, Fendr lets organisations set sensible guardrails in minutes and see what’s working through built-in analytics.

Built for the way humans really work 

The core product is a light-touch, easy-to-deploy solution that allows companies to encourage safe usage immediately – without the heavy implementation of a full endpoint tool. The dashboard gives privacy and security teams immediate, fine-grained control and visibility, helping them make balanced decisions and reduce risk in an informed and sensible way.

That human-centred approach is what makes Fendr stand out. Rather than trying to outsmart the user, it partners with them – minimising friction, reducing risky interactions, and fostering better data culture at scale. 

Backed by insight and action 

Fendr is currently running proof-of-concept trials and has already gained traction through the LASR Validate programme, where the team has engaged directly with academic, industry and government leaders in AI security. 

“Our key takeaway from the programme is that so many companies and individuals are standing on the precipice, waiting to take a leap on how to best implement gen AI security and what risks are acceptable in this new era,” Katie says. “The waters are fast moving and we’re yet to meet anyone that feels they have all the answers.” And through events like AI UK in London and RSA Conference in San Francisco, Fendr is positioning itself at the centre of the growing conversation around gen AI risk and trust. 

The road ahead 

With trial partners already on board and a growing interest from industry, Fendr is eyeing the next stage: scale. Over the next three to five years, the team aims to become the go-to solution for organisations seeking to embrace gen AI tools safely and smartly. They’re actively looking for more trial partners who want to pilot the solution and help shape its future. If your organisation is exploring gen AI adoption but wants to avoid costly mistakes, Fendr is keen to talk.