The Albanese government has introduced stringent legislation in response to the escalating threat of deepfake abuse. The legislation seeks to tackle the problem of deepfake pornography by making it a criminal offence to create and distribute it without consent. This action serves two important goals: tackling image-based abuse and improving digital safety at a national level. It also reflects the government's broader commitment to addressing the growing concerns surrounding artificial intelligence technology.
The new bill, the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024, cracks down on the creation and distribution of non-consensual deepfake sexually explicit material and imposes strict penalties on those involved. It targets content in which artificial intelligence is used to manipulate media, producing deceptive, fabricated material that appears authentic, and its consequences are severe.
Those convicted of distributing non-consensual deepfake sexually explicit content face a maximum prison sentence of six years. The offence becomes more severe when the person sharing the content is also the one who created it, carrying a sentence of up to seven years. These penalties reflect the government's commitment to tackling a form of abuse that is harmful and deeply distressing, and that often targets women and girls.
This legislation is a key part of a broader effort to address gender-based violence and enhance online safety. The government has also provided extra funding to support the eSafety Commissioner and has fast-tracked the review of the Online Safety Act. These measures, together with a strong stance against harmful practices such as doxxing and improvements to the Privacy Act, aim to give all Australians, particularly women experiencing domestic and family violence, greater control over their personal information.
Alongside the legislation, the Australian government has launched an age verification trial to better control access to explicit online content, part of the broader effort to address the abuse of artificial intelligence in producing deceptive sexually explicit material. The trial will assess various technologies for their safety, accuracy, and privacy protections. The objective is to ensure that age-restricted online content is accessible only to appropriate audiences, thereby protecting minors from potentially harmful material.
Under the trial, individuals will be required to provide proof of age before accessing certain content, with participants drawn from all age groups. The government has committed $6.5 million to support the Age Verification Trials. The trials will also examine social media platforms, which already impose age restrictions on users, and how those requirements can be enforced to keep online interactions safe and secure.
Australia is also making broader strides in the regulation of artificial intelligence (AI), acknowledging the need for comprehensive legislation to address the ethical and safety concerns associated with AI advancements. This regulatory agenda extends well beyond deepfakes to a wide range of AI applications, and takes a risk-based approach that prioritises high-risk uses.
At the same time, it aims to allow lower-risk forms of AI to develop without significant barriers. The proposed regulations prioritise testing and auditing, promote transparency, and ensure accountability. The government is, for instance, contemplating AI risk classifications similar to those under development in Canada and the EU, under which AI tools would be classed as low, medium, or high risk, with obligations increasing at higher risk levels.
Existing legislation already covers AI, including laws on privacy, consumer protection, copyright, and criminal conduct. However, the government acknowledges that these general regulations, and where necessary industry-specific laws or standards, may need to change. Alongside this work, it is developing an AI safety standard and exploring methods for watermarking AI-generated content.
Industry and advocacy groups have broadly supported the Australian government's efforts to prioritise digital safety and regulate AI. The initiatives have drawn support from prominent organisations such as the Australian Information Industry Association (AIIA) and Women's Agenda, which emphasise the key role regulatory frameworks play in addressing the risks posed by advanced AI technologies.
AI systems and applications are clearly improving wellbeing and quality of life while also boosting the economy, yet current regulatory frameworks fail to adequately address the associated risks. Industry's response underscores the need for extra precautions around legitimate but high-risk applications of AI.
It also points to the unexpected dangers that can arise from powerful 'frontier' models. Many have praised the government's commitment to testing, transparency, and accountability measures in high-risk settings, and industry's response reflects a shared commitment to the safe and responsible use of AI. Consistency with international jurisdictions will be crucial if Australia is to make the most of the technology.
The Australian government acknowledges that legislative measures alone are insufficient to address the deepfake crisis. It is also taking proactive steps against the cultural issues that contribute to the spread of non-consensual explicit material, an approach that relies heavily on education programmes and public awareness campaigns designed to inform the public about the potential misuse of AI technologies, especially deepfakes, and the associated legal and ethical ramifications.
In Australia, the government, civil society, industry, and other stakeholders are working together to assess the shortcomings in the country's policy and legal framework for AI, building a deeper understanding of the cultural challenges and the strategies needed to overcome them. Meeting those challenges requires a comprehensive approach spanning legislative action, educational initiatives, public awareness campaigns, and collaboration across stakeholders, and the government's strategy reflects its commitment to a safe and responsible AI environment in Australia.
With its new deepfake laws, Australia has made significant progress in fighting digital exploitation. The government's comprehensive strategy of strict penalties, age verification measures, and a broader focus on AI-related concerns aims to create a more secure online environment for everyone, and its consequences are far-reaching.
These measures not only deter potential wrongdoers but also give victims legal recourse, and they underscore Australia's commitment to addressing the ethical challenges posed by AI technologies. As artificial intelligence continues to advance, the policies and regulations governing its use can be expected to evolve with it.
Justin Lavadia is a content producer and editor at Public Spectrum with a diverse writing background spanning various niches and formats. With a wealth of experience, he brings clarity and concise communication to digital content. His expertise lies in crafting engaging content and delivering impactful narratives that resonate with readers.