Reinforcing data privacy in the AI era


A former temporary staff member of the NSW Reconstruction Authority uploaded an Excel spreadsheet containing over 12,000 rows of sensitive information about 2,031 Northern Rivers flood victims to ChatGPT between 12 and 15 March 2025. The unauthorised upload to the public artificial intelligence platform exposed sensitive information, including names, addresses, dates of birth, contact details, and personal health information of participants in the Northern Rivers Resilient Homes Programme.

The Authority revealed the breach on 14 October, a public holiday, after the upload went unnoticed for seven months. The revelation prompted urgent questions about transparency measures and the oversight of contractors within Australia’s public sector.

A third-party contractor accidentally entered government data, including personal and sensitive information, into ChatGPT while processing applications for the resilient homes programme. This incident exposes critical flaws in data governance practices within NSW public sector agencies and shows how inadequate oversight of artificial intelligence tools creates systemic cybersecurity risks.

Privacy Commissioner Sonia Minutillo emphasised the severity of the breach, stating, “This incident highlights the serious consequences and human impact of uploading government data and personal and sensitive information into public platforms like ChatGPT.”

The incident reveals major flaws in how public sector agencies manage data infrastructure and enforce data handling protocols when collaborating with contractors. The NSW Reconstruction Authority failed to establish adequate measures to prevent unauthorised data sharing via cloud platforms that do not comply with government-approved security frameworks. NSW public sector agencies must notify the Privacy Commissioner and inform affected individuals when a data breach is likely to cause serious harm.

Commissioner Minutillo stressed the need for accountability, stating, “This information must never be entered into public platforms and agencies have a responsibility to ensure their staff, including contractors follow all applicable guidelines for the safe and responsible use of artificial intelligence platforms.”

This situation reveals the risks that arise from fragmented data management among agencies and contractors, especially when they use artificial intelligence systems without proper oversight for data governance. The contractor circumvented the established cybersecurity protocols designed to safeguard sensitive information within secure government systems. The incident affects individuals who submitted comprehensive personal details for flood recovery support through the Northern Rivers initiative.


These citizens trusted the NSW Reconstruction Authority to protect their data within secure government infrastructure, not to expose it through public AI platforms that retain uploaded information. The incident forces a comprehensive reassessment of how public sector agencies approach data collaboration with external contractors. Organisations must establish explicit protocols governing artificial intelligence use, particularly regarding what information contractors can access and which platforms they can use for data processing.

Leaders in the public sector should promptly review their data sharing agreements and ensure contractors understand the limitations on uploading government information to public platforms. Agencies need to establish strict policies that classify unauthorised data uploads to AI platforms as significant violations, requiring prompt reporting and thorough investigation. The Information and Privacy Commission NSW confirmed that it is independently investigating the incident in line with the reporting obligations of the Mandatory Notification of Data Breach (MNDB) Scheme.

The Authority has initiated an independent review of its breach identification and management protocols, while Cyber Security NSW investigates risks from third-party access. Government agencies have been required to apply the NSW AI Assessment Framework since March 2022, yet this breach shows major enforcement gaps in ensuring contractors follow policies that restrict uploading sensitive data to public platforms. Public sector organisations must swiftly adopt pre-approved AI platform lists and establish real-time monitoring systems that identify unauthorised data transfers as they occur, rather than months later.

The Authority plans to review and enhance its data protection measures and is working with ID Support NSW, which is providing free identity security advice and counselling for those affected. The incident sets a clear standard: agencies must ensure temporary employees and contractors complete essential AI ethics training and understand the limitations on public platform usage before accessing government systems containing personal information.


Justin Lavadia is a content producer and editor at Public Spectrum with a diverse writing background spanning various niches and formats. With a wealth of experience, he brings clarity and concise communication to digital content. His expertise lies in crafting engaging content and delivering impactful narratives that resonate with readers.
