RMIT cybersecurity expert tackles rising threats


The advance of AI is creating new opportunities for cybercriminals to exploit Australians, and its rapid development is outpacing the supply of experts able to counter these threats.

Artificial intelligence is opening numerous new routes for hackers to target vulnerable Australians, from novel scams to the infiltration of smart devices, and this fast-paced development has left a shortage of skilled cybersecurity professionals to combat the threats.

Chao Chen, a key figure in Australian cybersecurity, has issued a stark warning amid a wave of increasingly sophisticated scams that cost Australians approximately $2.74 billion in the last financial year alone.

In 2020, Australians reported scam losses of only $851 million to Scamwatch, ReportCyber, IDCARE, the Australian Financial Crimes Exchange (AFCX), and the Australian Securities and Investments Commission (ASIC), according to data from the competition watchdog. However, this figure increased during the COVID-19 pandemic, rising to $1.8 billion in 2021 and then setting a record of $3.1 billion in 2022.

Chen, deputy director of the Enterprise AI and Data Analytics Hub at RMIT’s College of Business and Law, says AI has enabled technology to become far more personalised and responsive than ever before, simplifying daily activities. However, he cautions that this is a double-edged sword, as AI-powered tools can also be used to automate and scale up cyberattacks and scams.

“While such incidents may have been rare initially, the increasing availability of AI tools and the sophistication of these attacks suggest a rising threat,” Chen said.

“We have already noted a marked increase in AI-enhanced phishing scams, ransomware attacks, and deepfake-related incidents in the past few years.”

NSW Police sounded the alarm earlier this month about a disturbing scam that uses AI to create fake videos or messages impersonating a person’s loved ones or celebrities to lure victims into sham investments. One Facebook investment scam claimed Hunter Valley resident Gary Meachen, who lost his life savings of $400,000 to a scheme that appeared to have the backing of high-profile figures including billionaire Elon Musk, Prime Minister Anthony Albanese and former prime minister Julia Gillard.

Impersonations of local politicians were also featured in the scam. A fraudulent profile, bearing the name and image of Sunshine Coast Mayor Rosanna Natoli, tried to extract banking details from people via Messenger.

AI is also being exploited to mimic well-known figures such as science communicator Karl Kruszelnicki to promote fraudulent health products, and to feature TV personality David Koch in clickbait-style images. The trend is emerging overseas as well: British engineering giant Arup fell victim to an elaborate deepfake scam in which a Hong Kong-based employee was tricked into transferring $US25 million to swindlers.

Chen highlighted that the AI tools used to fabricate these deceptive images, videos, and voices could enable the automation and scaling up of cyberattacks, making them more challenging for authorities to detect. He referred to studies indicating that AI could probe for vulnerabilities in software and networks, identifying potential access points more swiftly and accurately than the manual techniques employed by hackers.

Chen underscored another threat: the potential for hackers to abuse AI systems by subtly, and often invisibly, modifying their input data. He explained that this manipulation can cause AI models to produce inaccurate predictions or classifications. A well-known incident from nearly a decade ago illustrates the risk: users manipulated Tay, a Microsoft-developed AI chatbot, into publishing racist and sexist remarks and references to Adolf Hitler.
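The kind of subtle, invisible tampering Chen describes is known in the research literature as an adversarial example. As a rough illustration only, using a placeholder model and a random image rather than anything drawn from Chen’s research, the sketch below nudges every pixel of an input a tiny amount in the direction that most increases a classifier’s error:

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Placeholder classifier standing in for any trained image-recognition model.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    model.eval()

    image = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in input image
    label = torch.tensor([3])                              # stand-in "true" label

    # Find the direction of pixel change that most increases the model's error.
    loss = nn.CrossEntropyLoss()(model(image), label)
    loss.backward()

    # Shift every pixel a tiny, near-invisible amount in that direction.
    epsilon = 0.05
    adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

    print("prediction on original image: ", model(image).argmax(dim=1).item())
    print("prediction on perturbed image:", model(adversarial).argmax(dim=1).item())

Against a genuinely trained model, a perturbation this small is typically imperceptible to a person yet can be enough to flip the prediction.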

“These adversarial examples can mislead AI systems in critical applications such as autonomous vehicles, medical diagnosis, and financial fraud detection,” Chen said. “Moreover, by analysing the outputs of an AI model, attackers can infer sensitive information about the training data.

“For instance, inverting a facial recognition model could allow hackers to reconstruct images of individuals used in the training dataset.”
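The inversion Chen refers to can also be sketched in a few lines. This is a toy illustration built on placeholder components (a tiny stand-in classifier rather than a real facial-recognition system): starting from random noise, an attacker optimises the image itself until the model is highly confident it is looking at a chosen identity, gradually recovering what that identity “looks like” to the model.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Toy stand-in for a facial-recognition model that scores five known identities.
    model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 5))
    model.eval()
    target_identity = 2  # the person the attacker wants to "see"

    # Start from pure noise and optimise the image itself, not the model.
    reconstruction = torch.rand(1, 1, 32, 32, requires_grad=True)
    optimiser = torch.optim.Adam([reconstruction], lr=0.05)

    for _ in range(200):
        optimiser.zero_grad()
        # Push the image towards whatever makes the model most confident
        # that it is looking at the target identity.
        loss = -torch.log_softmax(model(reconstruction), dim=1)[0, target_identity]
        loss.backward()
        optimiser.step()
        reconstruction.data.clamp_(0, 1)  # keep pixel values in a valid range

    confidence = torch.softmax(model(reconstruction), dim=1)[0, target_identity]
    print("model confidence in the reconstructed face:", confidence.item())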

The Idiap Research Institute in Switzerland recently investigated this kind of attack, with researchers developing a template using a ‘pre-trained geometry-aware face generation network’ trained on a mix of real and synthetic faces.

Cybersecurity specialists, including Chen, have reported a significant rise in AI-enhanced phishing scams, ransomware attacks and deepfake incidents across Australia in recent years.

The Australian Competition and Consumer Commission’s (ACCC) 2023 scam report revealed that fraudulent investment schemes alone cost Australians an astonishing $1.3 billion in the past year. Despite growing awareness of scams and of AI’s potential to amplify them, Chen warned that Australia faces a significant shortage of professionals with expertise in AI technologies.