Home Affairs restricts ChatGPT use amid calls for government regulation on AI
The Department of Home Affairs has blocked public servants from using ChatGPT as it waits for a whole-of-government position on AI to emerge.
At Senate estimates earlier this week, Department of Home Affairs Secretary Mike Pezzullo expressed concern about the possibility of AI being used in government without proper corporate authorisation or oversight.
As a result, he issued an internal instruction restricting the use of ChatGPT. Secretary Pezzullo said the restriction was intended to prevent individuals from deciding to use ChatGPT purely for convenience, as doing so would fail to meet the required security standards.
“I don’t want a permissive situation where an officer can individually decide, without any safeguards, to use this technology because it’ll make their day go faster,” he said.
While machine learning and large language models can be beneficial for government departments, it was noted that these technologies are usually acquired through proprietary arrangements and are therefore subject to regulation and control.
While Secretary Pezzullo did not rule out a permanent block on ChatGPT usage, he said that a government framework setting out procedures and security standards for its use would be a suitable solution.
“It needs to be considered in the Australian Government along with the private sector continually evaluating emerging technologies and assessing both their potential and the risks associated with their use in the public sector,” Department of Home Affairs Chief Operating Officer Justine Saunders said.
The block coincides with recent appeals to the US Congress by OpenAI CEO Sam Altman, whose company created ChatGPT, for government regulation of AI development.
Mr Altman said that the rapid advancement of powerful technologies such as AI could have harmful consequences if not subject to appropriate regulation.
“I think if this technology goes wrong, it can go quite wrong, and we want to be quite vocal about that; we want to work with the government to prevent that from happening,” he said.
“We try to be very clear about what the downside case is and the work that we have to do to mitigate that.”
Mr Altman proposed establishing new, enforceable frameworks requiring AI models to meet defined safety standards, undergo evaluation by independent auditors, and pass specified safety tests before they can be launched.
Eliza is a content producer and editor at Public Spectrum. She is an experienced writer on government and public-sector topics, as well as stories that uplift and improve the community.