
The I in Artificial Intelligence: A reflection of our collective conscience


There’s a real danger of systematizing the discrimination we have in society [through AI technologies]. What I think we need to do – as we’re moving into this world full of invisible algorithms everywhere – is that we have to be very explicit, or have a disclaimer, about what our error rates are like.  – Timnit Gebru 

It’s been only 10,000 years since the dawn of civilisation as we know it. 

That’s when we moved from the forests onto the fields – the dawn of agriculture, cities and states, and the beginning of the notion of private property, i.e. a shift from a collective ‘we’ to an individual ‘I’. 

But more importantly, it’s when we, en masse, started to delegate decision making to our trusted kings & queens and prime ministers & presidents. 

It can be argued that we’re now at the dawn of a new civilisation, as we increasingly entrust decision making to faceless algorithms. 

But how do we develop ‘trust’ in AI – how do we ensure these systems are responsible, fair and unbiased? 

When discussing bias in AI systems, what we are referring to is models that discriminate against certain groups of people – that is, models that reflect implicit human prejudice around attributes such as race, gender and age. 

Even though we have a good understanding of human biases, we are only now beginning to grapple with the biases that can infiltrate AI systems – systems that are increasingly being deployed at scale, and that are becoming ever more influential in our day-to-day lives.  

Common examples include risk assessment and profiling – such as loan applications, visa approvals and judicial decisions – and HR assessments, such as hiring and promotion tools. 

Artificial Intelligence systems are not inherently biased. In fact, they are designed to provide balanced, fair, unbiased results – so what’s all the fuss about? 

The issue is that they can AMPLIFY bias at SCALE. 

These models merely reflect inherent human bias that creeps in via the data used to train the models, and the implicit and explicit influences of those developing such models. 

One important fact to note is that the data alone is NOT responsible for producing biased models. 

For this reason, it’s not enough for those developing the models to be solely responsible for detecting and mitigating bias. It’s a much broader societal issue that needs to be addressed through diversity and due consideration.  

As such, the field of AI needs broader representation from a number of different areas – including technical, legal, risk, ethics, policy, governance and human resources expertise – as well as representation from disadvantaged and underrepresented groups. 

We also need to be aware of feedback loops that can be created: Human bias begets data bias, which begets model bias, which begets human bias… 
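
To make this loop concrete, here is a minimal, hypothetical Python sketch – not drawn from the article or any real system – of the ‘selective labels’ version of the problem: a lender only observes repayment outcomes for the applicants it approves, so a biased starting estimate for one group is never corrected by new data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "selective labels" feedback loop. Both groups repay at the same
# true rate, but the model inherits a biased estimate for group_b from
# historical data. All names and numbers here are purely illustrative.
true_repay_rate = {"group_a": 0.7, "group_b": 0.7}
estimate = {"group_a": 0.7, "group_b": 0.4}   # biased starting point

for round_number in range(10):
    for group, p in true_repay_rate.items():
        if estimate[group] > 0.5:
            # Approved applicants' outcomes are observed, so the
            # estimate is refreshed with fresh, representative data.
            outcomes = rng.random(500) < p
            estimate[group] = outcomes.mean()
        # A rejected group generates no outcome data at all, so its
        # biased estimate is never challenged: model bias begets data bias.

print(estimate)  # group_a hovers near 0.7; group_b stays frozen at 0.4
```

Breaking such a loop usually requires a deliberate intervention – for example, collecting outcome data for the group the model keeps rejecting – which is exactly the kind of explicit scrutiny argued for above.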

Organisations that leverage any data-enabled technologies – especially at scale – are entrusted by society to ensure privacy, and the responsible and ethical use of data in decision making. Explicitly dealing with bias, however, is a relatively new concept to many.  

Once bias is identified, which is the crucial first step, removing/mitigating the bias is non-trivial, and varies across applications.  

Such considerations may include questions like, “Do you remove the bias from the data prior to training the models?” A question then arises: is the model still representative of the broader population it’s trying to reflect?  

Or, “Do you eliminate the bias at the point at which the model generates a result, prediction or decision?” Once again, will the result be representative? And even if it is, how easy will it be to identify and eliminate the bias in the first place? 
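
As a rough illustration of these two options – reweighting the training data beforehand versus adjusting decisions afterwards – here is a hedged sketch using scikit-learn on synthetic data. The reweighting scheme and the equal-rate thresholding below are one simple choice each, offered as assumptions for illustration, not as the article’s (or the only) approach.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic, hypothetical data: 5 features, binary label y, and a
# binary protected attribute `a` that is correlated with the label.
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 5))
a = rng.integers(0, 2, size=1000)
y = (X[:, 0] + 0.8 * a + rng.normal(size=1000) > 0.4).astype(int)

# Option 1 - pre-processing: reweight so every (group, label) cell
# carries equal total weight, weakening the group/label correlation
# the model would otherwise learn (in the spirit of "reweighing").
w = np.ones(len(y))
for g in (0, 1):
    for label in (0, 1):
        cell = (a == g) & (y == label)
        w[cell] = len(y) / (4 * cell.sum())

model = LogisticRegression().fit(X, y, sample_weight=w)

# Option 2 - post-processing: choose a separate score threshold per
# group so both groups end up with the same positive-decision rate.
scores = model.predict_proba(X)[:, 1]
target_rate = 0.5
thresholds = {g: np.quantile(scores[a == g], 1 - target_rate) for g in (0, 1)}
y_hat = scores >= np.array([thresholds[g] for g in a])
```

Each route answers the question differently: reweighting changes what the model learns from, while thresholding changes only the final decision – and both can affect how representative the outputs are, which is precisely the trade-off raised above.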

A further difficulty in creating unbiased AI systems is that there are different definitions and interpretations of what represents a fair system. For an AI system to ‘understand’ fairness, we need to encode it in a mathematical definition. It is often also non-trivial to balance potentially opposing definitions of fairness in a single model – whilst meeting accuracy and performance targets. 
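
For concreteness, two common formalisations – stated here in standard notation, not taken from the article – are demographic parity and equalised odds, where Ŷ is the model’s decision, Y the true outcome and A the protected attribute:

```latex
% Demographic parity: equal positive-decision rates across groups
P(\hat{Y} = 1 \mid A = a) \;=\; P(\hat{Y} = 1 \mid A = b) \qquad \forall\, a, b

% Equalised odds: equal true- and false-positive rates across groups
P(\hat{Y} = 1 \mid Y = y,\, A = a) \;=\; P(\hat{Y} = 1 \mid Y = y,\, A = b)
\qquad \forall\, a, b,\; y \in \{0, 1\}
```

Known impossibility results (e.g. Chouldechova, 2017; Kleinberg et al., 2016) show that natural criteria like these, together with calibration, cannot in general all be satisfied at once when base rates differ between groups – which is why balancing fairness definitions within a single model is genuinely hard.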

So, how do we tackle such issues? 

Unfortunately, there’s no single approach that will enable us to identify and eliminate all bias from our models, and from the data we use to build them. But there are a number of things we can do, including:

(1) Ask the right questions: Should we do it? Can we do it? How can we do it in an ethical, fair and responsible way? 

(2) Define clearly what ‘fair’ looks like 

(3) Understand bias and all its sources

(4) Understand the data – is it fair and representative, and what are the possible sources of bias?

(5) Evaluate bias in models, and identify each step in the process where it manifests itself

(6) Minimise/remove bias in model outputs – testing of models needs to move beyond just performance, to include quantifying levels of bias (a sketch of such a test follows this list), and 

(7) Adopt frameworks and guidelines that enable a scalable methodology for mitigating bias. 
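
On point (6), a minimal, hypothetical example of what ‘testing beyond performance’ can look like is a fairness metric wired into the model’s acceptance tests. The metric, data and tolerance below are all illustrative assumptions, not a prescribed standard:

```python
import numpy as np

def demographic_parity_gap(y_hat, a):
    """Absolute gap in positive-prediction rates between groups."""
    rates = [y_hat[a == g].mean() for g in np.unique(a)]
    return max(rates) - min(rates)

# Illustrative predictions and group labels for eight individuals.
y_hat = np.array([1, 0, 1, 0, 0, 1, 0, 1])
a     = np.array([0, 0, 0, 0, 1, 1, 1, 1])

gap = demographic_parity_gap(y_hat, a)
print(f"demographic parity gap: {gap:.2f}")

# In a CI pipeline this becomes a hard gate next to the accuracy tests:
# assert gap <= 0.10, f"bias gap {gap:.2f} exceeds tolerance"
```

The point is less the particular metric than the habit: bias levels become a tracked, gated quantity, just like accuracy or latency.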

An example of an Ethical AI Framework, designed by the Australian Federal government, can be found on the Department of Industry, Science, Energy and Resources website, under its AI Ethics Principles. 

There are also broader considerations that should be given to deploying models at scale, especially those with direct societal impact. These can include: 

  • Is the team diverse, i.e. does it reflect different opinions, thoughts and views? 
  • How can the model be attacked, abused and misused? 
  • Are there any incentives to encourage and promote the consideration and proliferation of fairness in our models? 
  • Have we considered all possible implications of the models on society and individuals? 
  • Do we need/want a fully automated system or is a human-augmented system suitable for our needs? 
  • Is the system explainable? Does it need to be? How much do we need to understand HOW it makes decisions for us to trust it? 
  • Are there relevant governance and accountability processes in place? 

All this needs to be delicately balanced with not being so heavy-handed that we stymie that crucial element which AI enables – Innovation! 

The path forward may not be trivial, but as a collective, it’s our joint responsibility to help build a fairer civilisation alongside our algorithmic cousins. 


Dr Alex Antic is a trusted and experienced Data & Analytics Leader, Consultant, Advisor, and a highly sought-after Speaker, Trainer & Advisory Board Member.

He has 18+ years post-PhD experience and knowledge in areas that include Advanced Analytics, Machine Learning, Artificial Intelligence, Mathematics, Statistics and Quantitative Analysis, developed across multiple domains: Federal & State Government, Asset Management, Insurance, Academia, Banking (Investment and Retail) & Consulting.

Alex was recognised in 2021 as one of the Top 5 Analytics Leaders in Australia by IAPA (Institute of Analytics Professionals of Australia). He also holds several senior advisory roles across industry, government, start-ups and academia.

His qualifications include a PhD in Applied Mathematics, First Class Honours in Pure Mathematics, and a double degree in Mathematics & Computer Science.
