
Chief scientist warns government on ChatGPT


Australia’s chief scientist, Dr Cathy Foley, has warned the federal government that it needs to sharpen its AI policymaking following the rise of ChatGPT.

According to the Australian Financial Review, Dr Foley, speaking at the annual World Economic Forum meeting in Davos, said she expects the federal government to ask her office to prepare a report on AI and its implications.

“This is an example where the private sector has brought up a technology, it gets adopted really fast, and we haven’t been ready for it, to work out how we manage this,” she said. 


ChatGPT, a chatbot built on a large language model (LLM) that can generate human-like text, has prompted discussion within government about its impact and about risks that carry complex policy implications.

Beyond the questions of plagiarism, copyright and compensation raised by the model’s training on billions of texts drawn from the internet, there is also the risk of impersonation and fraud.

ChatGPT also tends to generate inaccurate and biased information that could be used for nefarious purposes.

Noting these risks, Dr Foley said a report on AI could greatly help the government formulate a response to the emerging challenges.

“Where the government asks me a question, I go up to the research community, get the best and brightest to help me answer that question very briefly – 1500 words flat,” she said. 

“This is the information. There you are, do what you want with it. And that has been very powerful with government, being able to get flat, independent advice, which is evidence-based, to help them make decisions.” 


According to Dr Foley, the federal government is well placed to manage the policy and regulatory challenges arising from AI, thanks to the eSafety Commissioner as well as a report on AI ethics by the Human Rights Commissioner.

Dr Foley also said that tech companies should acknowledge and address the policy issues surrounding AI.

“That’s what we should be doing: responsible research should always have a parallel path, which isn’t done by the people who are doing the research because they’re so excited and want to push things through,” Dr Foley said. 

“People almost like a red team, saying: ‘How do we make sure that this is safe? Where do we put the boundaries of what we want, to have safeguards in place?’” 

Dr Foley said that while formulating a full range of responses to AI will take some time, the federal government will have to learn to live with the new technology.

Source: The Australian Financial Review. Content has been edited for style and length. 


Eliza is a content producer and editor at Public Spectrum. She is an experienced writer on topics related to the government and to the public, as well as stories that uplift and improve the community.

