New data shows Australians want accountable AI

New research released by the Australian Human Rights Commission shows 46% of people in Australia are not aware that the government makes important decisions about them using artificial intelligence (AI).

Older people, those not in paid employment and lower-income earners were least aware that government uses AI to make decisions. However, people in these groups are more likely to be affected by such decision-making.

The research on attitudes towards AI, which was commissioned from Essential Research for the Commission’s ongoing Human Rights and Technology project, also shows strong public demand for AI to be used accountably and transparently, so that our human rights are protected.

“We should not ‘beta test’ new technology on vulnerable groups in our society,” said Human Rights Commissioner Edward Santow.

“Where AI is used in decision making, the decisions can be harder to understand. It can also be more difficult to prove when such decisions are unlawful or unfair. We must learn lessons from ‘Robodebt’ by ensuring that AI-informed decision making is fair and accountable.

“The research we commissioned shows that people want new technology to be used in ways that are transparent and understandable, and they want to be informed when it is used. That is clearly not happening, because almost half of the people we polled did not even know it was happening.”

“Australians are not opposed to new technology, but they recognise that AI can make mistakes. Australians want laws that promote human rights and accountability to be applied rigorously to AI and other new tech.”

The overwhelming majority of people polled by the Commission (88%) want to be able to understand how AI is used on them, by being given reasons or an explanation for AI-informed decisions that affect them. Where AI is used to make a decision that may be unlawful or otherwise wrong, 87% of those polled said it was ‘very important’ or ‘quite important’ to be able to appeal that decision.

Between 41% and 48% of participants said they would have ‘a lot’ more trust in automated government decisions if oversight measures were put in place, including human checks, limits on the sharing of personal information, and stronger laws to protect people’s human rights.
