
Why do most Aussies not trust AI in the workplace?


A study from the University of Queensland and KPMG Australia has revealed that only 40 per cent of Australians trust the use of artificial intelligence (AI) in the workplace.

The study surveyed more than 17,000 people across 17 countries on their trust in and attitudes towards AI at work. KPMG Chair of Organisational Trust Professor Nicole Gillespie said Australians are among the least comfortable with AI use at work, particularly for HR purposes such as monitoring, evaluating and recruiting employees.

“Australians are more open to AI being used to automate tasks and help employees complete their work,” Professor Gillespie said. 

“In fact, they actually prefer AI involvement in managerial decision-making over sole human decision-making – the caveat is they want humans to retain control.” 


Professor Gillespie also expressed concern that only 43 per cent of Australians believe their employer has practices in place to support the responsible use of AI.

KPMG Futures Lead Partner James Mabbott highlighted that people have low confidence in government, technology and commercial organisations to develop, use and govern AI in society’s best interest. 

“Organisations can build trust in their use of AI by putting in place mechanisms that demonstrate responsible use such as regularly monitoring accuracy and reliability, implementing AI codes of conduct, independent AI ethics reviews and certifications and adhering to emerging international standards,” Mabbott said. 

While many Australians recognise the benefits of AI, most are still hesitant about its implementation in the workplace. The study showed that only 44 per cent believe that the benefits of using AI outweigh the risks, and only a quarter believe AI will create more jobs. 

Two-thirds of the study’s respondents were mainly concerned about potential risks of AI such as cybersecurity and privacy breaches, manipulation and misuse, loss of jobs and deskilling, the erosion of human rights and inaccurate or biased outcomes. 


Professor Gillespie said that mitigating those risks and protecting people’s data and privacy are critical to trust in AI. 

“The survey found 70 per cent of Australians expect AI to be regulated, but only 35 per cent believe there are enough safeguards, laws and regulations in place,” she said. 

“It also found the community expects an independent regulator, rather than reliance on industry governance.” 

The study, which predated the commercial release of ChatGPT, also sheds light on current understanding and awareness of AI, and on who is trusted to develop, use and govern it.


Eliza is a content producer and editor at Public Spectrum. She is an experienced writer on topics related to the government and to the public, as well as stories that uplift and improve the community.
