The Office of the Australian Information Commissioner (OAIC) and CSIRO's Data61 have unveiled the revised De-Identification Decision-Making Framework. The framework helps public sector agencies and businesses across Australia share or release data responsibly while minimising privacy risks. First launched in 2017, it received an update on 28 August 2025. It incorporates global best practice tailored to local circumstances, drawing on insights from the Australian Bureau of Statistics (ABS) and the Australian Institute of Health and Welfare (AIHW).
The OAIC clarifies that the framework is aimed at organisations that manage personal information and are considering sharing or disclosing it, helping them meet their ethical responsibilities and legal obligations under the Privacy Act 1988 (Cth). The initiative strengthens digital government efforts by enabling secure data sharing, improving data governance, reducing cybersecurity risks, and building trust in artificial intelligence and cloud storage solutions.
Key facts
- Established in 2017, the framework draws on the UK's anonymisation decision-making framework and incorporates insights from the ABS and AIHW. It remains the leading Australian model for de-identification. Its ongoing application shows agencies how to embed privacy safeguards in digital government projects, such as health research, education reporting, and transport planning, while protecting people's identities.
- The OAIC confirms that the guide serves organisations that manage personal information and are considering sharing or disclosing it, helping them meet their ethical duties and legal obligations under the Privacy Act 1988 (Cth). It stresses that executives must prioritise compliance; the framework reduces legal risk and enhances data-driven service delivery.
- The OAIC stresses that assessing re-identification risk requires context. Data qualifies as 'de-identified' when the likelihood of re-identification in the data access environment is extremely low; that is, when there is no reasonable chance of re-identification. This requires agencies to assess technical measures in real-world scenarios, for example by limiting the reuse of sensitive datasets in AI development unless proper safeguards are in place.
The framework guides agencies through a three-stage process. The first stage is a data situation audit, in which custodians articulate their data circumstances, understand their legal obligations, identify stakeholders, and plan their communication strategy. The second stage focuses on assessing and managing risk: it calls for technical strategies such as aggregation, redaction, or hashing, along with contractual safeguards, to tackle potential re-identification threats. The third stage emphasises impact management, which involves strategic planning for stakeholder engagement, crisis response, and continuous monitoring to build trust and accountability.
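The technical strategies named in the second stage can be illustrated with a minimal sketch. The record layout, field names, salt, and banding rules below are illustrative assumptions, not part of the framework itself:

```python
import hashlib

# Hypothetical records; all field names and values are illustrative assumptions.
records = [
    {"name": "Alice Smith", "postcode": "2601", "age": 34, "condition": "asthma"},
    {"name": "Bob Jones",   "postcode": "2602", "age": 37, "condition": "asthma"},
    {"name": "Cara Wu",     "postcode": "2601", "age": 52, "condition": "diabetes"},
]

def deidentify(record, salt="example-salt"):
    """Apply redaction, hashing, and aggregation to one record."""
    return {
        # Redaction: the direct identifier ("name") is dropped entirely.
        # Hashing: a salted one-way digest replaces it, so records can
        # still be linked across datasets without revealing the identifier.
        "id": hashlib.sha256((salt + record["name"]).encode()).hexdigest()[:12],
        # Aggregation: quasi-identifiers are coarsened into broader bands.
        "age_band": f"{(record['age'] // 10) * 10}-{(record['age'] // 10) * 10 + 9}",
        "postcode": record["postcode"][:2] + "xx",
        "condition": record["condition"],
    }

deidentified = [deidentify(r) for r in records]
for row in deidentified:
    print(row)
```

Real deployments would pair such transformations with an assessment of the data access environment, since the framework treats de-identification as a property of data in context, not of the data alone.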
On 1 August 2025, the Asia-Pacific Privacy Authorities released the Guide to Getting Started with Anonymisation, complementing Australia's framework and promoting regional consistency. This alignment of resources ensures that public agencies meet national commitments and address international expectations.
The framework plays a crucial role in the field of artificial intelligence. In its determination concerning I-MED, the OAIC found that de-identified patient data used for AI model training did not constitute personal information. I-MED implemented technical measures such as time-shifting and aggregation, along with strict contractual safeguards, to reduce the risk of re-identification to a very low level. The determination demonstrates that AI and machine learning can use properly managed de-identified datasets while adhering to privacy regulations.
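Time-shifting, one of the measures mentioned above, can be sketched as follows. The record layout, shift window, and seed are hypothetical; the idea is that each patient's timestamps are moved by a single random per-patient offset, which preserves the intervals between a patient's events while obscuring the real dates:

```python
import random
from datetime import datetime, timedelta

# Hypothetical scan metadata; field names and values are assumptions.
scans = [
    {"patient": "p1", "scan_time": datetime(2024, 3, 1, 9, 30)},
    {"patient": "p1", "scan_time": datetime(2024, 3, 15, 14, 0)},
    {"patient": "p2", "scan_time": datetime(2024, 4, 2, 11, 15)},
]

def time_shift(scans, max_days=180, seed=42):
    """Shift each patient's timestamps by one random per-patient offset.

    Intervals within a patient are preserved (useful for longitudinal
    analysis), but the true dates are obscured.
    """
    rng = random.Random(seed)
    offsets = {}
    shifted = []
    for s in scans:
        if s["patient"] not in offsets:
            offsets[s["patient"]] = timedelta(days=rng.randint(-max_days, max_days))
        shifted.append({**s, "scan_time": s["scan_time"] + offsets[s["patient"]]})
    return shifted

shifted = time_shift(scans)
# The gap between a patient's scans survives the shift by construction.
assert (shifted[1]["scan_time"] - shifted[0]["scan_time"]
        == scans[1]["scan_time"] - scans[0]["scan_time"])
```

In practice, such a transformation would be combined with aggregation and contractual controls, as in the I-MED case, rather than relied on in isolation.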
This framework enables agencies to manage cloud storage effectively, build robust data infrastructure, and ensure that advances in AI and digital government meet ethical and legal standards. The initiative breaks down obstacles created by data silos, enables secure data collaboration, and embeds privacy into governance and cybersecurity practices.
The De-Identification Decision-Making Framework helps Australian agencies structure their approach to reducing privacy risk and using information effectively in digital government. Leaders implement organised methodologies, such as data situation audits, contextual risk analysis, and impact management, to oversee data collection, cloud storage, and collaboration. This approach fosters innovation while safeguarding individual privacy.
The OAIC’s recent review of de-identified health datasets for AI training shows that effective management of controls drives the secure advancement of artificial intelligence while meeting the requirements of the Privacy Act 1988. The framework aligns with the Asia-Pacific Privacy Authorities’ 2025 anonymisation guidance, positioning Australia within a broader regional standard. In the future, organisations that adopt these strategies will strengthen their data infrastructure, improve cybersecurity, and break down data silos, all while maintaining public trust in privacy and data governance.
Justin Lavadia is a content producer and editor at Public Spectrum with a diverse writing background spanning various niches and formats. With a wealth of experience, he brings clarity and concise communication to digital content. His expertise lies in crafting engaging content and delivering impactful narratives that resonate with readers.
- Justin Lance Marcel Lavadia
