Chief scientist warns government on ChatGPT
Australia’s chief scientist Dr Cathy Foley is warning the federal government that it needs to strengthen its policymaking on AI following the rise of ChatGPT.
According to the Australian Financial Review, Dr Foley, who made her comments at the annual World Economic Forum meeting in Davos, said that she expects the federal government will ask her office to prepare a report on AI and its implications.
“This is an example where the private sector has brought up a technology, it gets adopted really fast, and we haven’t been ready for it, to work out how we manage this,” she said.
ChatGPT, a “large language model” (LLM) machine-learning chatbot that can generate human-like text, has prompted discussion within government about its impact and risks, which carry complex policy implications.
Aside from the plagiarism, copyright and compensation issues raised by the AI’s ability to draw on billions of texts from the internet, there is also the risk of impersonation and fraud.
ChatGPT also tends to generate inaccurate and biased information that may be used for nefarious purposes.
Noting these risks, Dr Foley said a report on the technology could greatly help the government formulate a response to the emerging challenges.
“Where the government asks me a question, I go up to the research community, get the best and brightest to help me answer that question very briefly – 1500 words flat,” she said.
“This is the information. There you are, do what you want with it. And that has been very powerful with government, being able to get flat, independent advice, which is evidence-based, to help them make decisions.”
According to Dr Foley, the federal government is capable of managing the policy and regulatory challenges arising from AI, thanks to the eSafety Commissioner as well as a report by the Human Rights Commissioner on AI ethics.
Dr Foley also said that tech companies should acknowledge and address the policy issues surrounding AI.
“That’s what we should be doing: responsible research should always have a parallel path, which isn’t done by the people who are doing the research because they’re so excited and want to push things through,” Dr Foley said.
“People almost like a red team, saying: ‘How do we make sure that this is safe? Where do we put the boundaries of what we want, to have safeguards in place?’”
Dr Foley said that while formulating a full range of responses to AI will take some time, the federal government will have to learn to live with the new technology.
Source: The Australian Financial Review. Content has been edited for style and length.
Eliza is a content producer and editor at Public Spectrum. She is an experienced writer on topics related to the government and to the public, as well as stories that uplift and improve the community.