The Australian Government is evaluating the possibility of imposing restrictions on “high-risk” applications of artificial intelligence (AI) and automated decision-making.
Generative AI, in which systems produce new content such as text, images, audio, and code, has surged in popularity through programs like ChatGPT, Bard, and Bing’s chat feature. However, concerns have been raised about its potential for misuse.
According to The Guardian, the National Science and Technology Council’s discussion paper warns that AI can be put to harmful purposes, such as creating deepfakes to manipulate democratic processes, spreading misinformation and disinformation, and even promoting self-harm.
The paper identifies algorithmic bias as one of the major concerns regarding AI; such bias could, for example, favour male candidates over female candidates in recruitment or disproportionately target minority racial groups.
On a positive note, the paper acknowledges that beneficial applications of AI are already in use, such as analysing medical images, improving building safety, and cutting costs in the legal field.
The NSTC report also highlighted concerns about the concentration of generative AI resources among a few large multinational technology companies, primarily based in the US, which poses risks to Australia.
The paper emphasised the government’s commitment to implementing necessary safeguards, particularly for high-risk AI applications and automated decision-making.
While the paper acknowledged the need for Australia to align its governance with that of major trading partners, in order to capitalise on global AI systems and promote domestic growth, it urged stakeholders to consider the impact on Australia’s tech sector and current trading activities if a stricter approach of banning high-risk activities were adopted.
“The upside is massive, whether it’s fighting superbugs with new AI-developed antibiotics or preventing online fraud,” Minister Husic said.
“But as I have been saying for many years, there need to be appropriate safeguards to ensure the safe and responsible use of AI.”