How disinformation dictated the Voice referendum and skews public discourse

“Radio is new, and we are learning about the effect it has on people. We learned a terrible lesson,” said Orson Welles on October 31, 1938.

The night before, Welles was at the microphone at CBS Radio, broadcasting ‘The War of the Worlds’. By presenting H.G. Wells’ 1898 novel as a series of live news bulletins, he sold fiction as fact, and the public spiralled into panic.

False and distorted information isn’t anything new. But our unfettered access to social media has created an ever-expanding sounding board for disinformation. Whether intentional or not, the posting of biased or misleading views has a habit of latching onto platform algorithms, attracting attention, and infiltrating public belief. And it’s incredibly difficult, perhaps impossible, to squash.

This is why, when issues turn political, the spread of misinformation and disinformation is at its most profound. The lead-up to Australia’s October 14 Voice to Parliament referendum was one such occasion.

The referendum saw around 60 per cent of Australians voting ‘no’ at the booths. Regardless of the result and which side you fell on, insights into how the debate played out on social media are compelling and point to the ability of misinformation to reach deep into democratic processes and threaten their resilience.

Before the voting weekend, plenty of ‘yes’ or ‘no’ chatter went on in homes and by the water cooler, but much of the polarising influence began online. This made it difficult for voters to gauge what was relevant and what were falsehoods masked as logic. For government and authorities, identifying and getting ahead of malicious intent beneath a loud chorus of genuine belief is an uphill battle.

Disinformation tells many tales

Although some users believe they’re spreading ‘truth’ to help others match their view of the world, acts of ‘misinformation’ begin somewhere, and Australia’s referendum was rife with ‘disinformation’.

Views espoused online claimed the referendum represented government control, communism, and wasted money, to name a few. And some of the tactics weren’t dissimilar to previous online disinformation campaigns, including those targeting Covid-19 vaccines.

Prior to the referendum, Fivecast undertook extensive analysis and found that anti-government narratives commonly engaged pandemic deniers and anti-vaccination believers. Rhetoric would link back to these ‘big pharma’ movements, tapping into their conspiratorial beliefs to encourage a ‘no’ vote.

Common rhetoric found online argued that Covid-19 vaccination was about control, climate change was about taxation, and the Voice was simply a land-grab agenda.

Anti-government sentiment also laced fears of communism through the ‘no’ zeitgeist. Parts of the public suggested the Prime Minister was trying to “hoodwink” voters, claiming the referendum campaign had nothing to do with improving the lives of Indigenous people and instead served to enrich a “union, communist, and Aboriginal elite”.

In another disinformation crossover, ongoing movements to abolish Welcome to Country reappeared in the lead-up to the referendum. Some believed voting ‘no’ would drop Welcome to Country from national practice. Corners of social media commented that they would hold Australia to the promise of abolishing Welcome to Country when the “sensible majority vote no”.

While much-needed efforts are underway to combat fake news in Australia (earlier this year Labor proposed its draft misinformation and disinformation bill), the referendum is a stark demonstration of the issue at hand.

Social media algorithms make getting ahead of dangerous rhetoric a challenge, so government and national security personnel need to draw back the curtain on malicious intent before online users fall into misleading black holes.

Fake news infiltrating global politics

But Australia isn’t alone in the matter. The deliberate intent to mislead or manipulate people by obfuscating facts, confusing issues, and creating false narratives feeds conversations all over the world. In addition to social media’s push, geopolitical tensions and foreign influence tactics are at an all-time high, which means disinformation has entered the war chiefs meeting.

In the lead-up to the invasion of Ukraine, for example, disinformation carried Russia’s propaganda efforts. Online news and social media made it easy for Russian officials to falsely accuse Ukraine of running a Nazi regime, which allowed them to frame the invasion as a “denazification” mission. After the invasion, Russia continued these tactics through manipulated ‘fact-checking’ via pseudo fact-checking websites and channels, such as ‘War on Fakes’.

Also new to the war chiefs’ table is AI. Western governments and national security organisations are now in an arms race against the AI boom and its capacity to manipulate what people actually see.

We’re talking about AI-generated deepfake technology, and it’s already entered America’s 2024 presidential race. In a deepfake video, a realistic-looking Biden unleashed a cruel rant on a transgender person – the U.S. President was shown saying “you will never be a real woman.” This kind of aggressive language has great potential to skew voters, and the message wasn’t even legitimate.

Where it’s difficult to distinguish the real from the fake, influence operations can capitalise. As the US election nears in 2024, it’ll be vital authorities can protect the public from AI-powered disinformation such as intervening early, allowing louder voices to produce counter messaging, and ensuring people know how to spot fake from real.

Through assessment of engagement and spread, and an understanding of the aims of the disinformation source, human analysts can find the canaries in the coal mine and assess whether or not a disinformation campaign is underway.
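As a loose illustration (a minimal sketch, not Fivecast’s actual methodology), the Python snippet below shows one simple way ‘engagement and spread’ might be operationalised: flagging posts whose share velocity vastly outpaces the corpus baseline so a human analyst can review them. The Post structure, its fields, and the threshold are all hypothetical.

```python
from dataclasses import dataclass
from statistics import median


@dataclass
class Post:
    """A hypothetical social media post record (illustrative fields only)."""
    post_id: str
    shares: int       # total shares/reposts observed
    age_hours: float  # hours since the post was published


def flag_anomalous_spread(posts: list[Post], ratio_threshold: float = 10.0) -> list[Post]:
    """Flag posts spreading far faster than the corpus baseline.

    Share velocity (shares per hour) is compared against the median
    velocity of the corpus. Posts exceeding the illustrative threshold
    are candidates for human review, e.g. as possible coordinated
    amplification.
    """
    velocities = {p.post_id: p.shares / max(p.age_hours, 0.1) for p in posts}
    baseline = median(velocities.values())
    if baseline == 0:
        return []
    return [p for p in posts if velocities[p.post_id] / baseline > ratio_threshold]


# Synthetic corpus: four organically spreading posts and one outlier.
corpus = [
    Post("a", shares=40, age_hours=10),
    Post("b", shares=35, age_hours=9),
    Post("c", shares=50, age_hours=11),
    Post("d", shares=45, age_hours=10),
    Post("e", shares=2000, age_hours=3),  # spreading at ~150x the median rate
]
for post in flag_anomalous_spread(corpus):
    print(f"Review candidate: {post.post_id}")  # prints: Review candidate: e
```

The median baseline is a deliberate choice in this sketch: a single viral outlier would inflate a mean-based baseline and hide the very anomaly the analyst is looking for.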

While training communities in both AI literacy and media literacy will be crucial, dealing with the problem without censoring or denying free speech is essential, and that requires a deep understanding of what’s malicious and what isn’t.

By its nature, disinformation is publicly available and intended to be disseminated as widely as possible, so government and national security organisations need to go where people communicate, where they get their information and engage with new ideas. With a better handle on the social media zeitgeist, individuals can be confident in practising free speech without contributing to a broader disinformation campaign.

Brenton Cooper
Chief Executive Officer at Fivecast

Brenton Cooper, a Co-Founder at Fivecast, boasts over two decades of experience in technology and management roles, with a career spanning BAE Systems, Tenix Defence, Motorola, and Data to Decisions CRC. He is a seasoned leader, data scientist, and engineer, possessing expertise in Technology Management, Strategic Planning, Data Analytics, Research Leadership, and Big Data.
