Disinformation vs misinformation: Time for a social media licence?

For our recent Cyber Lates meetup, which was hosted by Plexal CCO Saj Huq, we focused on the topic of disinformation. And we were thrilled to welcome panellists Elisabeth Braw, senior fellow at the American Enterprise Institute; Colin Kelly, head of applied research at Adarga, the AI software platform for defence; and Lyric Jain, CEO and co-founder of Logically, the company combatting disinformation with AI and human intelligence.

Time for an information licence to operate social media?

During the discussion, we delved into the problems surrounding disinformation and how it affects our society daily. Firstly, what are disinformation and misinformation? As Elisabeth explained, disinformation is the “wilful sharing of incorrect information,” where the person or group releasing it intends to mislead, whereas misinformation is “accidentally incorrect.” She stated that even with good intentions, there’s a risk that shared information is false because anyone can generate and distribute it.

Offering an apt example that helped to illustrate the risk of disinformation, Elisabeth said: “We need a driving licence to drive a car, but we don’t need an information licence to operate social media.”

She added that several people died in the storming of the Capitol based on misleading information and called disinformation a hugely “corrosive issue in our democracies” and difficult to solve “without having the tools to verify information”.

Who’s responsible for solving the spread of disinformation?

Lyric said that we’ve reached a point where nation states are creating fake accounts and false narratives to draw specific individuals in with material they’d find interesting. He emphasised the problem of disinformation is “growing at an alarming rate” and the “harm caused impacts the general public, governments, public health, public safety, elections and national security.”

Colin built on that and called disinformation a “perennial question” from customers. “How can you solve disinformation from a technical point of view? [By] assisting users to try distinguishing it from regular information themselves,” he reasoned. Sorting fact from fiction means educating the user to consider whether there’s alignment with the information that’s been published – such as checking sources across both Russian and western media.

Saj asked the panel whose responsibility it is to solve the issue of disinformation. Elisabeth noted that, while “Twitter didn’t set out to rule the world of political discourse” and nor did Facebook, it’s their responsibility to keep the platforms clean. However, she then highlighted the reality of regulation by explaining that “as soon as you start regulating what is on the website, some people will say ‘you’re silencing me’.”

Elisabeth suggested a way to solve this issue would be to educate people properly using public libraries, which she believes would be the perfect institutions to offer information literacy training.

While countermeasures are essential and takedowns have their place, Lyric cautioned that organisations must be careful and strike a balance to avoid the risk of creating a censored state.

Colin pointed out that the rapid pace of technology, combined with the explosion of data, has created an interesting point in time. Noting how Adarga stays on track amidst this, he said: “We have a responsible AI committee and we’re continuously thinking about how we make sure we align ourselves to our principles, our AI and what we do.”

If you missed our event or would like to tune in again, you can catch it in full below.

