Due to the growing danger, the Australian State Parliament will soon introduce legislation to combat the production and spread of harmful and misleading AI-generated deepfake content. The bill tackles the use of artificial intelligence (AI) to create violent and sexually explicit deepfake images, audio, and video, confronting one of the most urgent challenges in digital governance and cybersecurity today.
Under the proposed legislation, creating or distributing AI-generated content that depicts real individuals in a degrading or harmful manner, such as simulated violence, assault, nudity, or immoral acts, may lead to penalties of up to four years in prison or fines of up to $20,000. The bill targets situations where AI-generated content mimics real individuals without their permission, focusing on safeguarding people and organisations from reputational harm and misuse of their identities.
Protecting data integrity
The legislation also covers a broader scope, tackling the dangers deepfakes pose in contexts such as scams, blackmail, political misinformation, and fraud. It aims to protect Australians from the rising risks of AI manipulation across online platforms, social media, and digital communications.
Strengthening digital government security
These regulations mark a crucial step towards safeguarding the integrity of digital government operations and data management practices. As AI-generated content continues to expand, public sector organisations face significant challenges in protecting digital platforms, elections, and public services from manipulation and misuse. The eSafety Commissioner reports a staggering 550 percent rise in the spread of explicit deepfake content since 2019, highlighting the vulnerability of both the public and private sectors to digital manipulation.
The proposed legislation addresses these challenges by establishing rules for how AI content may be developed and by strengthening public institutions' preparedness to confront digital threats. Awareness is crucial to preventing unauthorised access to the personal data used to create fakes, while robust data governance rules ensure consent for data use and reduce the risks linked to deepfakes. Strong data management policies also build public trust in the government's ability to protect sensitive information.
Protecting public trust
The government recognises that deepfakes pose a serious threat that extends beyond sexual exploitation. Having published a discussion paper earlier this year, it is actively exploring broader implications, such as their effects on political disinformation and election integrity. AI-generated content challenges public trust and security during elections, as deepfakes can manipulate public sentiment and mislead voters. Policymakers are also exploring new regulations to tackle the misuse of deepfakes in other contexts, including fraud, extortion, and political propaganda.
The ongoing modernisation and digitisation of public sector operations make these laws crucial for defining clear parameters for AI use while upholding the integrity of digital communication. The legislation will significantly shape forthcoming policies and compliance obligations for entities that manage sensitive information, including government bodies and private sector contractors.