As AI continues to make headlines for its extreme use cases, opportunities and threats, it is aligning directly with the ‘Peak of Inflated Expectations’ phase of the Gartner Hype Cycle. Businesses and individuals across the world are experimenting with AI technology themselves or using AI-based tools to achieve drastic changes in their experiences, productivity, entertainment, knowledge and more. But this phase cannot and should not last forever. These expectations should soon plateau and become more realistic, grounded in the practical ways the technology can be used to optimise productivity.
As the Australian Human Rights Commission recently pointed out, Australia has an opportunity to become a world leader in responsible and ethical AI, and AI itself is neither friend nor foe. Government and corporate leaders are starting to recognise the need for this shift to more pragmatic mindsets, with many pointing to the need for AI regulation to avoid the ‘horror show’ the technology could otherwise become.
However, relying on regulation alone to keep Australians’ and Australian businesses’ data secure is not enough.
AI regulation is just one part of the Shadow AI solution
Just as we saw with the emergence and evolution of social media and public cloud solutions, new technologies and platforms introduce threats when anyone can share confidential, private, or otherwise damaging information into the public arena, or in a digital format that third parties can reach. Yet regulating social media or the public cloud was not the answer.
Instead, most companies and organisations at risk of these kinds of threats would today have a social media or public cloud policy, with comparable processes in place. Rather than legislated rules or industry-specific regulations, there is a society-wide recognition that guard-rails should exist so staff understand what is and is not appropriate for them to share, along with clear consequences for when those guard-rails are overstepped. This more sustainable and actionable approach is the one that should be adopted with AI.
While government organisations and corporates around the world are starting to ban staff from using AI tools like ChatGPT, these bans will do little to limit security and other risks for long. AI-based tools are evolving and being built so rapidly that there will always be another tool around the corner that staff want to try. By banning each tool as it arises, organisations will be perpetually playing catch-up rather than leading the productive adoption of valuable technologies.
Secure behaviours come from prioritising people and process
Tech-savvy employees are likely to find workarounds if they believe they have found an AI-based tool that genuinely makes their lives and work easier. If, for example, there is a rule that no one can use their corporate email accounts to sign up for non-approved AI-based tools, some employees may simply switch to their personal or burner accounts instead.
There is also the risk introduced by employees unknowingly using AI-based tools. Many everyday Australians, for example, would not think twice about agreeing to the terms and conditions of tools that automatically correct their grammar, turn meeting audio into written transcripts, or create new headshots from supplied imagery. In these cases, the individual rarely investigates where their data is stored, how it is analysed, or whether it is re-used elsewhere in the world for other purposes. Yet each of these interactions introduces the risk that the personally identifiable data, contextualised content, or other sensitive information involved could be sent to international organisations for uses the individual never imagined.
It is this domino effect of AI use that is still broadly misunderstood by Australians and Australian businesses and, consequently, the main reason why Shadow AI is already emerging in most workplaces. It is introducing risk right under the noses of executives and Boards who are simultaneously trying to adopt AI throughout the organisation to drive productivity. But this risk is manageable if organisations take a practical approach to educating their teams.
While regulation will bring some benefits, there is a greater need for training and education about what AI is, how it functions, and how to make the most of its capabilities. If all staff, from the front desk to the Chair of the Board, know how to use AI-based tools securely, they can mitigate business risk while also making the most of new and useful technologies.
Many business leaders continue to see technology as both the problem and the solution when assessing AI in the workplace. But this mindset will keep producing unsuccessful silver-bullet attempts, and any successes will likely be short-lived. The people using AI, and the processes put in place to ensure they use it ethically, securely, and productively, are just as important as the technology itself. Assessing Shadow AI’s links to technology, people, and processes collectively will be critical to preparing for the workplaces of tomorrow.