To call generative AI a popular subject would be an understatement.
It’s like saying fish and chips by the seaside tastes nice, which is a disservice when the reality is that guzzling the salty, vinegar-coated British delicacy at the beach makes for a flavour sensation that sets the taste buds alight.
That's to say, regardless of your personal stance on generative AI, in the tech world it's truly an inescapable hot topic that only seems to gain momentum by the day, whether that's in the news, on social media, or at events within the walls of an innovation workspace recently ranked among the coolest in London… Ahem.
And that’s precisely why we explored the subject of generative AI for our recent Cyber Lates evening – specifically the role of the technology within cyber security and whether it’s a threat or opportunity.
Learn more about Plexal’s cyber security focus here.
Kicking off the discussion on the night as our host and moderator, Plexal Innovation Consultant Matt Miller opened by saying: “In the past six months, we’ve seen an explosion of personal assistants enter the workplace. However, these are not your typical flesh and blood assistants.
“They don’t require a salary; they don’t require health insurance; they don’t even require water. They’re instead made up of an abstraction of zeroes and ones in the form of generative AI.”
But what is generative AI? And why is this subject getting technologist tongues wagging more enthusiastically than the tail of a dog that's seen its owner come home with a fresh box of treats?
As Matt elaborated, it’s the ability to use automation to create original content, such as text, images, videos or even sounds, with the tech democratised most notably by ChatGPT. “It’s been made very popular recently, as everyone knows, through OpenAI, a not-for-profit research institute devoted to developing AI in a responsible and safe way.”
For all the excitement generative AI has created in helping people in their professional and personal lives, no matter how responsible and safe the intentions, such rapid adoption and omnipresence have raised cyber security concerns, such as the risk of accelerated development of phishing campaigns.
Turning to his panel, Matt asked the question: is generative AI a threat or an opportunity for cyber – or both?
The opportunity
Taking a positive position, Sian John, Senior Director at Microsoft Security Business Development, says: “I’ll start with opportunity first because I’m an optimistic person – there really is an opportunity to overcome some of the challenges that we’ve got in security.” Sian pointed to skills shortages alongside the speed and scale of attacks, noting how the last Microsoft analysis revealed the average time from receiving a phishing email to having a device compromised was just 72 minutes.
“We’ve got to deal with that real speed of response that’s needed. So, the real opportunity is that if you can use some of those generative AI capabilities – using things like Microsoft Security Copilot – you can then see if you actually get the scale or the capability to deal with it.
“There’s obviously the malicious use of it that’s going to happen, as it does with any technology. But using some of that generative AI capability is probably the only way you’re going to detect some of these more advanced phishing attacks as they come, as phishing moves away from being badly written to being attacks written with generative AI.”
On opportunities, Sian John at Microsoft (@sbj24) says the firm is using GPT4 capabilities internally, but notes this is to augment the human being. It’s not to say humans will be replaced by zeroes and ones, it’s about the mundane part of the job being taken away #CyberLates
— Plexal Cyber (@PlexalCyber) May 25, 2023
The threat
Providing a perspective based on his findings, Adlan Chaykin, an analyst in Control Risks’ cyber threat intelligence team, says the advent of open AI is acting as a gateway of sorts for bad actors and lowering the threshold for entering cybercrime. “We’re seeing an increase in criminals on dark web forums and marketplaces discussing how they can use it to improve their phishing and malware capabilities through better exploits – we’ve already seen information stealers being created with ChatGPT’s help,” he details.
Thankfully, as it stands, the technology doesn’t appear to be having an impact on those with high capabilities, such as state actors and ransomware groups. Nevertheless, Adlan cautions that the technology is being weaponised for the purpose of deepfakes to spread misinformation. “In that sense, it’s something to kind of worry about now, I think,” he adds.
Worrying about artificial super-intelligence is like worrying about overpopulation on Mars, says Adlan Chaykin @Control_Risks. I think, focus on what we have right now without getting distracted by things that could happen in the future. #CyberLates
— Plexal Cyber (@PlexalCyber) May 25, 2023
A united front
Giorgos Georgopoulos, CEO and co-founder of cyber threat intelligence startup Elemendar, kicks off by explaining that AI has been part of the business since it launched six years ago. He highlights that while the technology presents massive potential, there’s also a lot of hype – so it’s important to have the right tools for the task at hand.
Asked what responsibilities startups have to protect people, Giorgos offers: “Putting the onus on cyber startups is looking at probably the smallest ecosystem that could do something about it. If you think about protecting people, the consumer-facing side of things, there’s hardly anything that we can be doing within our borders.”
Touching upon a subject that Plexal is very much familiar with, as the innovation company closing the gap for collaboration between startups, government and industry, Giorgos continues: “You need to have cooperation from the government side, from the broader platforms onto which consumers exist online.”
Diving further into how the technology industry can work together, in terms of those building solutions, Giorgos implores people to consider in advance how technology may be abused in order to be as responsible as possible. “If you don’t even make the assessment, you’re guaranteed to get it wrong,” he says. “The genie’s out of the bottle – the baddies are going to use those tools, and they’re going to be very effective because worldwide, we’re in an arms race in the information space.
“This is where we, as the innovators, as a community, can help provide those defensive tools for countering the misuse or intended malicious actions.”
Want to come to our next Cyber Lates? Then keep your eyes peeled on our events page, stay tuned to our social channels (@PlexalCyber on Twitter and Plexal Cyber on LinkedIn) and sign up to our Cyber Community newsletter for more updates on our cyber security programmes.