Government reinforces AI safety regulations

The Albanese Government has introduced significant measures to improve the safety of artificial intelligence (AI) in Australia, addressing both public and industry concerns about the rapid rollout of high-risk AI technologies. In September 2024, it unveiled a series of initiatives that included proposed mandatory guardrails for high-risk AI applications, new criminal laws targeting the misuse of AI, and a voluntary AI safety standard to help businesses adopt safe AI practices.

These measures safeguard the public interest by preventing the misuse of AI in critical domains such as digital government and data security. They mark a notable progression in Australia’s digital policy framework, placing the nation at the leading edge of AI safety regulation. The Albanese Government has committed to enhancing public safety and providing clear guidance for businesses, aiming to establish a secure and reliable AI landscape that fosters innovation while protecting Australian citizens and public sector information.

Strengthening AI safeguards

The Albanese Government has released a proposals paper aimed at addressing the growing risks linked to AI by introducing mandatory safeguards in high-risk settings. The paper sets out a structured approach for overseeing the deployment of high-risk AI technologies across multiple sectors, including digital government and data management, and proposes ten essential safeguards that address potential harms such as data misuse, discrimination, and privacy violations.

Ed Husic, the Minister for Industry and Science, stated, “Australians want stronger protections on AI; we’ve heard that, we’ve listened.” The government’s announcement comprises four key initiatives:

  1. Proposals Paper for Mandatory Guardrails: The paper proposes a definition of high-risk AI along with ten essential safeguards. It also outlines three regulatory options for mandating them: adapting existing regulatory frameworks, introducing new framework legislation, and establishing a new economy-wide AI-specific law. The government is inviting public input on the proposals until 4 October 2024.
  2. Voluntary AI Safety Standard: This standard is now in effect and provides practical guidance for businesses that use high-risk AI. It aims to give businesses clarity ahead of mandatory regulation and will be revised over time to align with global best practice.
  3. New Criminal Laws to Combat Sexually Explicit Deepfakes: The government unveiled plans to establish new criminal offences that tackle the creation and distribution of non-consensual intimate images, particularly those produced by AI technology. Communications Minister Michelle Rowland stated, “These reforms send a clear message that the non-consensual sharing of intimate images is unacceptable in any form.”
  4. AI Policy for Government Use: Senator Katy Gallagher, Minister for Finance and Public Service, has unveiled a new policy on the use of AI in government operations. This policy promotes responsible and ethical adoption of AI within public services, focusing on improving efficiency and upholding public trust.

AI safety standard

The Albanese Government has introduced the voluntary AI safety standard, which gives businesses an immediate framework for managing the risks associated with high-risk AI applications. The standard takes a forward-looking approach, helping companies adopt best practices for AI safety ahead of the formal introduction of mandatory regulations. Ed Husic, Minister for Industry and Science, stated, “Business has called for greater clarity around using AI safely and today we’re delivering.”

The standard underscores the government’s commitment to fostering trust and transparency in AI technologies. It provides comprehensive guidance on essential topics including data privacy, algorithmic transparency, and accountability, and calls for strong risk management protocols and ethical AI practices that prioritise user safety and privacy. The voluntary AI safety standard establishes a clear baseline for AI operations, minimising potential harms linked to AI misuse such as biased decision-making, security breaches, and unauthorised surveillance.

The voluntary AI safety standard is aligned with comparable frameworks developed in the European Union, Japan, Singapore, and the United States. This alignment helps ensure that Australian businesses remain competitive and meet international standards while adapting to a changing global landscape.

AI misuse measures

To address the misuse of AI, the Albanese Government has introduced a range of targeted initiatives that prioritise the protection of individuals and improve safety in digital spaces. A crucial step is the introduction of new criminal laws tackling the production and distribution of non-consensual deepfakes: AI-generated images or videos that portray individuals in sexually explicit situations without their consent. These laws form part of a wider legislative effort to guard against the misuse of AI technologies for harmful ends, highlighting the importance of protecting privacy and personal dignity.

Minister for Industry and Science Ed Husic said the measures demonstrate the government’s determination to prevent AI misuse through strict penalties for offenders. “The creation and distribution of non-consensual deepfake content is now a criminal offence, with strict penalties to deter such behaviour,” Husic stressed, underlining the seriousness of these offences and their impact on individuals and communities. The decision to criminalise this misuse of AI sets a robust precedent that puts citizens’ safety and rights first.

AI safety in government

The Albanese Government’s AI safety initiatives carry important implications for the public sector, emphasising the need for greater security, accountability, and transparency in government operations. The measures aim to tackle the distinct challenges that AI presents to public services by establishing explicit standards and regulatory structures that promote the responsible use of AI technologies across all tiers of government.

A key element is the introduction of new policies to protect digital operations from AI-related threats. These policies aim to strengthen data security by guarding against unauthorised access and the misuse of AI technologies within public services. Ed Husic, the Minister for Industry and Science, stated, “The Albanese Government’s actions reflect a commitment to protecting digital infrastructure and ensuring that AI-driven applications serve the public good without compromising individual privacy or security.”

The voluntary AI safety standard also introduces essential guidelines for AI use in the public sector. It offers government agencies a structured approach, aligned with recognised global best practices, and helps ensure that AI applications remain transparent and accountable. These guidelines allow the public sector to reduce the risks linked to AI misuse, including biased decision-making and breaches of data privacy.