ASIC requests AI misuse legislation

Australia’s financial watchdog is already taking action against the alleged misuse of AI under existing laws, but it believes reforms are necessary to effectively regulate emerging technologies.

Joe Longo, the chair of the Australian Securities and Investments Commission (ASIC), highlighted the need to address legislative gaps in order to effectively prevent and respond to the potential harms associated with machine learning and automated decision-making. In the meantime, he said, ASIC is working to ensure companies are held accountable under existing laws. “ASIC is already pursuing an action in which AI-related issues arise,” he said.

“We’re willing to test the regulatory parameters where they’re unclear or where corporations seek to exploit perceived gaps.” Longo said that even though “a divide exists between our current regulatory environment and the ideal [one]… businesses and individuals who develop and use AI are already subject to various Australian laws.”

These include the general duties of directors under the Corporations Act, which are not limited to specific obligations, as well as privacy, online safety, corporations, intellectual property and anti-discrimination laws that apply broadly across all sectors of the economy.

Longo said that because harms caused by “‘opaque’ AI systems” are harder to detect than traditional white-collar crime, regulations tailored to crimes committed through algorithms or AI would be more effective at preventing them.

“Even if the current laws are sufficient to punish bad actions, their ability to prevent harm might not be,” Longo said. If an AI were to engage in insider trading or market manipulation, ASIC could impose penalties under the current regulatory framework. However, having specific laws for AI would be more effective in preventing and deterring such violations, according to him.

“What if a provider lacks adequate governance or supervision of an AI investment manager?”

“When, as a system, it learns to manipulate the market by hitting stop losses, causing market drops and volatility… when there’s a lack of detection systems… Yes, our regulations around responsible outsourcing may apply, but have they prevented the harm?

“Or a provider might use the AI system to carry out some other agenda, like seeking to only support related party products or share offerings, giving some preference based on historic data.”

“There’s a need for transparency and oversight to prevent unfair practices, accidental or intended. But can our current regulatory framework ensure that happens? I’m not so sure.

“Does it prevent blind reliance on AI risk models without human oversight that can lead to underestimating risks? Does it prevent failure to consider emerging risks that the models may not have encountered during training?”

Longo emphasised the need for legislation to safeguard consumers from potential harms caused by AI, focusing on the lack of transparency surrounding the use of AI, unintentional biases, the challenges of appealing automated decisions, and the difficulty of determining liability for any resulting damages.

“It isn’t fanciful to imagine that credit providers using AI systems to identify ‘better’ credit risks could (potentially) unfairly discriminate against vulnerable consumers.

“In such a case, will that person struggling have recourse for appeal? Will they even know that AI is being used? And if they do, who’s to blame? Is it the developers? The company?

“And how would the company even go about determining whether the decision was made because of algorithmic bias, as opposed to a calculus based on broader data sets than human modelling?”

The government’s recent response to the review of the Privacy Act agreed “in principle” to enshrine “a right to request meaningful information about how automated decisions are made.” The European Union’s General Data Protection Regulation goes much further: under Article 22, an individual has the right not to be “subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her.”

Longo pointed out that developers and policymakers have proposed potential safeguards, such as incorporating “AI constitutions” into decision-making models and red-team testing those models to see whether they can still be made to violate the preset rules.

“In response to these various challenges, some may suggest solutions such as red-teaming or ‘AI constitutions’—the suggestion that AI can be better understood if it has an in-built constitution that it must follow,” Longo said.

“But even these have been shown to be vulnerable, with one team of researchers breaking through the control measures of several AI models simply by adding random characters at the end of their requests.”

Another safeguard that Longo said had been floated was mandating “AI risk assessments,” a measure NSW has required of government agencies since 2022. “But even here, questions like those I’ve already asked need to be considered to ensure the risk assessment is actually effective in preventing harm,” Longo said.