Digital Government News

Home Affairs restricts ChatGPT use amid calls for government regulation on AI


The Department of Home Affairs has blocked public servants from using ChatGPT as it waits for a whole-of-government position on AI to emerge. 

At Senate estimates earlier this week, Secretary of the Department of Home Affairs Mike Pezzullo expressed concern about the possibility of AI being used in government without proper corporate authorization or oversight.

As a result, he issued an internal instruction restricting the use of ChatGPT. Secretary Pezzullo said the restriction was intended to prevent individuals from choosing to use ChatGPT purely for convenience, as such use would fail to meet the required security standards.

“I don’t want a permissive situation where an officer can individually decide, without any safeguards, to use this technology because it’ll make their day go faster,” he said. 

While machine learning and large language models can be beneficial for government departments, he noted that these technologies were usually acquired through proprietary arrangements, which kept them regulated and controlled.


While Secretary Pezzullo did not rule out a permanent block on ChatGPT usage, he said that a government framework setting out procedures and security standards for its use would be a suitable solution.

“It needs to be considered in the Australian Government along with the private sector continually evaluating emerging technologies and assessing both their potential and the risks associated with their use in the public sector,” Department of Home Affairs Chief Operating Officer Justine Saunders said. 

The block coincides with recent appeals to the US Congress by OpenAI CEO Sam Altman, whose company created ChatGPT, for government regulation of AI development.

Mr Altman stated that the rapid advancement of powerful technologies like AI can have harmful consequences if not subjected to appropriate regulation. 

“I think if this technology goes wrong, it can go quite wrong, and we want to be quite vocal about that; we want to work with the government to prevent that from happening,” he said. 

“We try to be very clear about what the downside case is and the work that we have to do to mitigate that.” 

Mr Altman proposed new, enforceable frameworks requiring AI models to meet defined safety standards, undergo evaluation by independent auditors, and pass specified tests before they can be launched.


Eliza is a content producer and editor at Public Spectrum. She is an experienced writer on topics related to the government and to the public, as well as stories that uplift and improve the community.
