Exploring tricks and treats on the AI landscape – security, ROI, deepfakes and investment

Plexal has revealed a new AI Security Challenge project that you have the chance to be involved in, so express your interest here if you’d like to be part of it.

As part of our Lates event series, and in keeping with spooky season, we hosted an AI-specific discussion to ask: is AI a trick or a treat for businesses?

In a conversation led by our CCO and Head of Innovation, Saj Huq, the panellists joining us at the Centre Stage of our Plexal Stratford workspace included:  

  • David Sully, CEO and Co-Founder at Advai, which aims to increase the adoption and automation of AI across all domains and businesses 
  • Joshua Walter, Partner at Osney Capital, a UK-based venture capital fund specialising in early-stage cyber security startups 
  • Dr Magda Chelly, Co-Founder at Risk Immune, which equips organisations with advanced, AI-driven risk management capabilities 

Expectation versus reality

We’re one year on from the 2023 AI Safety Summit, held on 1st and 2nd November, at which 28 nations signed the Bletchley Declaration committing to safe, human-centric AI, with international collaboration high on the agenda.

Looking at more recent events, Saj opened the conversation by calling on panellists to offer a perspective on current adoption of AI within businesses.

David confessed to being “epically cynical,” informed by what he sees day-to-day. “In the last year, companies have had huge pressure at board level, where people would discuss ChatGPT and a desire to get progress up and running,” he said. “There’s lots of concepts and work going on but understanding what’s useful and what isn’t is really challenging.”

For example, David noted that Klarna’s adoption of AI has seen its workforce shrink from 5,000 to 3,800 over the past year, with a stated goal of reaching 2,000, as reported by the BBC.

The other change he’s witnessed over the past six months is that companies have moved away from questioning the safety and security of AI towards asking about return on investment – something many are struggling to quantify.

This aligns with findings from Lucidworks, which show that 63% of global companies plan to increase AI spending in the next 12 months, down from 93% in 2023 – a drop driven by cost and security barriers. “Things are moving so fast, this makes it a challenge for any business, especially one trying to adopt AI at scale,” David said.

Minimal risk or millions lost

From Magda’s perspective, the worry is overreliance on AI as a silver bullet for knowledge – and that’s even before mass-market adoption. “There’s a general perception that AI is inherently ‘smart,’ which can be misleading,” she shared.

“Many people don’t understand that AI isn’t 100% accurate – it’s just algorithms and data. Building AI tools often doesn’t align perfectly with end-user expectations, which adds complexity. AI is valuable in certain tasks where it can reduce time and increase productivity, but it’s not applicable to every use case.”

Magda went on to highlight that a key challenge here is users failing to ask services like ChatGPT the correct questions to obtain the most useful answers. “Instead of replacing human work, AI should augment it by enabling people to focus on strategic tasks.”

Ultimately, context is key. Applying AI for marketing purposes could present minimal risk, Magda suggested, whereas “using it for contracts could cost a company millions if there are errors.” 

AI security and ethical concerns

Offering insight from his role as an investor in the cyber security space, Josh reasoned that while market demand is driving adoption of AI, there’s a lot of hesitance. “Many companies are focused on risk management, cost optimisation and efficiency improvements,” he explained.  

“Security is critical, especially with threats like deepfakes, where malicious actors can easily manipulate media, posing risks for both individuals and businesses. Investing in prevention, especially in identity and trust, has become a priority.”

Magda herself has been the victim of a deepfake scam following a brief foray onto TikTok. “A week or two after posting a short video, I had people asking if I provide services for cryptocurrency recovery,” she recalled, noting how realistic the impersonation was – the voice was the only real giveaway that it was a deepfake. “It’s extremely easy to do and that’s what scares me the most.”

Looking ahead at opportunities

Closing with guidance for those seeking investment, Josh shared that clarity and vision are essential. “The AI market is rapidly evolving, with companies being acquired at early stages and niche solutions emerging. When assessing AI investments, scalability and outcome specificity are essential. Nervousness exists around non-specific solutions that lack a clear use case or don’t add tangible value.
 
“There’s exciting work in automating workflows and improving security operations. In the assurance and secops spaces, AI has the potential to streamline processes, which is particularly beneficial for roles requiring complex, time-sensitive decisions.” 

David added: “There’s a huge need for autonomous network defence as cyber threats grow. Automated security systems are critical as data traffic increases, and companies capable of developing such systems have massive potential in securing networks and playing a vital role in the future of cyber security.”