A new commission has been formed by Oxford University to advise world leaders on effective ways to use Artificial Intelligence (AI) and machine learning in public administration and governance.
The Oxford Commission on AI and Good Governance (OxCAIGG) will bring together academics, technology experts and policymakers to analyse the AI implementation and procurement challenges faced by governments around the world.
Led by the Oxford Internet Institute, the Commission will make recommendations on how AI-related tools can be adapted and adopted by policymakers for good governance now and in the near future.
The new Commission’s inaugural thinkpiece, “Four Principles for Integrating AI & Good Governance” by Lisa-Maria Neudert and Philip Howard, examines the procurement and use of AI by governments and public agencies.
The report outlines four significant challenges in AI development and application that must be overcome if AI is to support good governance and serve as a ‘force for good’ in government responses to the COVID-19 pandemic.
The working paper underscores the urgent need for inclusive design, informed procurement, purposeful implementation and persistent accountability in order to integrate AI and good governance, and to protect and even advance democracy.
The authors raise issues relating to the need for training and specialized due-diligence processes, the integration of automated decision-making into policymaking, inherent bias in training data sets, and the explainability of algorithms, and they make recommendations for research and policy priorities.
The Commission will address these questions in a series of reports in the coming months as it looks at the impact of AI on government procurement and seeks to set out best practice for policymakers and government officials. These future working papers will examine the uses of AI in public service, including development, procurement and implementation, and will provide evidence about the real-world impact of AI.
The OxCAIGG commissioners are Dr Yuichiro Anzai, Chair of the Council for Artificial Intelligence Strategy and adviser to the Japanese government on strategic policy; Tom Fletcher CMG, founder of The Foundation for Opportunity and Visiting Professor at New York University; Dame Wendy Hall, Regius Professor of Computer Science at the University of Southampton and Chair of the Ada Lovelace Institute; and Professor Philip Howard, Director of the OII.
They will be joined by Sir Julian King, British diplomat and former European Commissioner for Security Union; Professor Safiya Noble, Associate Professor at the University of California, Los Angeles (UCLA); Mr Howard Rosen CBE, solicitor and former President of the Council of British Chambers of Commerce in Europe; Baroness Shields OBE, CEO of BenevolentAI and former UK Minister for Internet Safety and Security; and Professor Weixing Shen, Dean of Tsinghua University’s School of Law in Beijing, China.
Professor Philip Howard, Director of the Oxford Internet Institute, Chair of OxCAIGG and the co-author of the thinkpiece said: “AI will have an important role to play in building our post-coronavirus world. The pandemic will certainly supercharge the pressure for widespread surveillance, data collection, and the use of AI to deliver more efficient public services.”
“Innovative AI will need to be governed accordingly. Machine learning, coronavirus tracking apps, cross-platform data sets, and AI-driven public health research shouldn’t pose a risk to fundamental human rights and legal safeguards,” said Professor Howard.
The Commission’s global agenda of research and policy conversation will focus on finding effective ways to help government officials evaluate, procure and apply AI tools for the benefit of public service.
The Commission has several goals: investigating and analysing the AI implementation challenges faced by democratic governments worldwide; identifying best practices for evaluating and managing the risks and benefits of using AI in public policy, administration and governance; determining the next generation of research-driven policy guidelines needed to help public agencies implement AI and machine learning in policy decisions; and recommending specific action steps in research, practice and policy to create an effective environment for government departments evaluating, procuring and applying AI tools for use in public service.
Coming as governments around the world grapple with the ethics and data challenges of using AI-driven tools in the provision of public services, the Commission hopes to inform the debate on how AI can be used as a force for good in the delivery of public services, without the risks of perpetuating social inequalities or causing additional public policy problems.
OxCAIGG Commissioner Tom Fletcher CMG said: “Negotiating the opportunities and challenges of AI is the next frontline for diplomacy. It is in everyone’s interest that the rules catch up with the tech.”
OxCAIGG Commissioner Sir Julian King said: “AI is an essential part of the digital plumbing of our interconnected lives. It needs to work, be transparent, and accountable. Easy to say, hard to do. But it’s vital that governments, public and private sectors, and indeed citizens get this right, if we want to live in societies that enable us to achieve our potential, while respecting our fundamental rights.”
OxCAIGG Commissioner Professor Safiya Noble said: “Now is the moment when we need to think about robust protections that should govern AI and automation, because many of these products and services are coming at the expense of various publics—particularly vulnerable and oppressed people around the world. It’s not a foregone conclusion that any of these technologies should persist indefinitely.”
OxCAIGG Commissioner Baroness Joanna Shields OBE said: “AI can be a useful ally of human intelligence augmenting our capabilities and expanding our perspectives, but ultimately, we must remain human and accountable in our decisions.”
OxCAIGG Commissioner Howard Rosen CBE said: “The debate is about trust. AI potentially enhances trust and confidence in government’s competence dealing with complex issues and eliminating human error. However, we also need to consider at what point can it become suffocating and to what extent machine driven decisions can be made with discretion and humanity, without the machines themselves falling prey to inbuilt societal bias. Ultimately, we must ensure that AI has a positive impact on democratic accountability, rather than potentially undermining public trust in government.”
OxCAIGG Commissioner Professor Weixing Shen said: “As a commissioner, I think the OxCAIGG shall give an opportunity to establish the common basis and to provide new answers to pressing questions of AI governance. It will help to better the relationship between artificial intelligence development and governance, ensuring that artificial intelligence is safe, controllable and accountable, promoting sustainable economic, social and ecological development, and jointly working to build a community with a shared future for mankind.”
OxCAIGG Secretary and co-author of the report Lisa-Maria Neudert said: “While there is excitement about the prospects for AI in public service, there is concern about the impact of AI systems on democracy. We certainly want the potential benefits of economic efficiencies and intelligent decision support systems, but without the risks of perpetuating social inequalities and losing political accountability.”
original source: The Oxford Internet Institute