Can governments convince the public to trust them on AI use?

A research report has set out how governments could build public trust in AI service delivery in the face of declining trust in governments across the world.

  • Worldwide decline in public trust in governments
  • Report suggests public willing to trust AI with humans in the loop
  • Autonomous and semi-autonomous AI has limited support when it comes to delivering public services
  • Six recommendations covering public input during design and deployment, and access to recourse

QUT Business School Professor Kevin Desouza and Dr Gregory Dawson from Arizona State University outlined key trust-building measures for government AI use in a research report for the IBM Center for The Business of Government, against a backdrop of waning trust in governments worldwide.

Professor Desouza said trust in government in general had seen a steady decline over the past few years, including in Australia, where 55 per cent of people said their default was to distrust something until they saw evidence of its trustworthiness.

“Despite a high level of trust in private-sector AI use – think chatbots, Siri, or AI-curated social media recommendations – the public is reluctant to apply the same level of trust to governments when it comes to the use of AI for delivery of public services,” Professor Desouza said.

“After Robodebt, Queensland Health’s DNA forensic lab failures, and the Sports Rorts affair, to name a few high-profile examples of malfeasance, Australia’s governments have dented, if not lost, the trust of the public.

“Realising AI’s potential will require a concerted effort to ensure that citizens trust AI systems, the government, and the government’s use of AI.

“Key to building public trust is the embedding and enforcement of transparency and human oversight in governments’ AI strategic plans.

“This entails governments investigating how they can generate and maintain public trust throughout the design, development and deployment of AI systems.”

Professor Desouza said discussions with senior government executives and independent research on AI in the public sector had resulted in the report’s six recommendations to support building trust in AI.

“We found acceptance of autonomous AI for general tasks, such as mailing out property valuations, was high,” he said.

“However, for specific services explicitly requested by a person and involving direct interaction, such as requesting a building code waiver, people were willing to accept AI only with a human making the final decision – that is, augmented AI.

“In general, it is better to err on the side of more human involvement in AI because governments are far less likely to run into substantial resistance if humans remain integral to the process.”

The researchers recommended:

  • Promotion of AI-human collaboration – it is vital that work processes are designed to take full advantage of AI-generated information, while ensuring that humans remain in the loop so that processes and outcomes are executed responsibly.
  • Focus on justifiability – a business case for an AI system needs to be based on public value and the greater societal benefits to build legitimacy and earn trust and social licence to operate AI systems.
  • Explainability – an AI system must be able to articulate how it arrived at a particular outcome. Even with complex systems, one should be able to specify the data used to train the algorithm, why the algorithm is being deployed, whether it is fit for purpose, and how the system is maintained.
  • Built-in contestability – when AI systems are being designed, the public should be able to contest which datasets are being used, seek evidence that those datasets are fit for purpose, and even inspect performance data on the algorithms. When systems are deployed, the public must have the right to question algorithmic decisions, and the administrative processes needed for recourse must be readily accessible.
  • Built-in safety – a critical mechanism for building trust in AI systems is ensuring the public understands the safety concerns that come with their use. An incident-tracking database should be fully public facing, with regular reporting. At a minimum, the database should record the system involved, a specific description of the problem, how the AI’s error can be recognised, when the system was last audited, and when it was last updated (a minimal sketch of such a record follows this list).
  • Stability – algorithms must provide consistent responses over time and across cases. A stable algorithm is fair and unbiased and can meet the demands of varying cases and interaction modes. When conditions change, the AI should change as well.
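To make the safety recommendation concrete, here is a minimal sketch of what a single record in a public-facing incident-tracking database might look like. The report does not prescribe a schema: the AIIncident class, its field names, and the example values below are illustrative assumptions, not the researchers’ specification.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative sketch only: the report names the minimum information an
# incident-tracking database should hold; this schema and its field names
# are assumptions, not a prescribed design.
@dataclass
class AIIncident:
    system_name: str          # which AI system had the problem
    problem_description: str  # specific description of the problem
    error_evidence: str       # how one can tell the AI's output was incorrect
    last_audited: date        # when the system was last audited
    last_updated: date        # when the system was last updated

# A hypothetical record, as it might appear in regular public reporting.
incident = AIIncident(
    system_name="Automated property valuation mail-out",
    problem_description="Valuations issued from an outdated land-use dataset",
    error_evidence="Valuations diverge from manually verified comparables",
    last_audited=date(2024, 6, 1),
    last_updated=date(2024, 7, 15),
)
print(incident)
```

Keeping each record this small makes regular public reporting straightforward while still covering every field the report lists as a minimum.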

“Given the frantic pace of AI development, government has a responsibility to be more proactive around the design, development, and deployment of AI systems to advance national goals,” Professor Desouza said.
